Galaxy Charter aims to solve poor task management in digital marketing workplaces by making task allotment more transparent and improving employee performance. Target users are employees of large digital marketing agencies, who are often not assigned tasks that match their skill sets and who have little emotional engagement with their projects. Our project is an interactive galaxy wall that supports task allotment. Stars represent available tasks, allocated according to the skill set required, and a dispenser outputs physical tokens representing tasks. When a suitable employee walks past the wall, a matching star follows that person. Users can view a task's details by raising an index finger, then grab the task star, drag it around, and move it to the dispenser to accept the task. Once the task is done, users touch the token back against the wall, which turns it into a star in the galaxy. Other users can see completed tasks and select available tasks to work on.
Technical Description
Physical Form:
The Galaxy Charter takes the physical form of a constructed false
wall made of corflute board, attached to a repurposed wooden frame
on wheels. A digital interface is projected onto the wall using a
short-throw projector. Plastic balls serve as physical tokens
representing the task stars; they are dispensed through a cardboard
tube after users accept tasks.
Python-OpenCV and Webcam:
Two interaction methods were chosen as core to the development of
the concept. The first is side-profile face detection, which allows
the digital stars to track users' movements. The second is hand
gesture detection and control, which allows users to interact with
the adaptive surface. Python-OpenCV code, in conjunction with a
webcam, implements both. The webcam captures users in its field of
view, and the code detects a user's left-side profile and draws a
bounding box around it in the webcam capture. The user's general
position and movement are tracked from the coordinates of that box,
and the data is sent to Unity.
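A minimal sketch of this tracking loop, assuming the Haar profile-face cascade bundled with OpenCV (the exact cascade, smoothing, and coordinate handling in our script may differ):

import cv2

# Haar profile cascade shipped with OpenCV (detects side-on faces)
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_profileface.xml")

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Each detection is an (x, y, w, h) bounding box
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        centre_x = x + w // 2  # horizontal position forwarded to Unity
    cv2.imshow("Galaxy Charter tracking", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()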
Likewise, Python-OpenCV code is used for hand detection and gesture
control. The code utilises the MediaPipe framework, which provides
ready-made detection of objects such as hands. Gestures are
classified according to the number of fingers the user is holding
up, and the code sends the detected gestures and hand positions to
Unity, e.g. a coordinate of the bounding box around the user's hand
when performing a grab gesture. Data is sent to Unity through server
sockets.
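A hedged sketch of the gesture side, assuming MediaPipe's Hands solution and a UDP socket; the address, message format, and finger-counting heuristic shown here are illustrative rather than the exhibit's exact code:

import json, socket
import cv2
import mediapipe as mp

UNITY_ADDR = ("127.0.0.1", 5065)  # assumed address of the Unity listener
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.7)
FINGER_TIPS = [8, 12, 16, 20]  # index, middle, ring and pinky tip landmarks

cap = cv2.VideoCapture(0)
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_hand_landmarks:
        lm = results.multi_hand_landmarks[0].landmark
        # A finger counts as raised if its tip sits above the joint two indices below it
        fingers_up = sum(1 for tip in FINGER_TIPS if lm[tip].y < lm[tip - 2].y)
        # One raised finger -> show task details; closed fist -> grab
        gesture = "point" if fingers_up == 1 else "grab" if fingers_up == 0 else "none"
        # Landmark 9 (middle-finger knuckle) is used as a rough hand centre
        msg = {"gesture": gesture, "x": lm[9].x, "y": lm[9].y}
        sock.sendto(json.dumps(msg).encode(), UNITY_ADDR)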
Unity:
The digital user interface has been developed in Unity; it is what
is projected onto the wall and what users interact with. It shows
the galaxy-themed display, built from a Unity space asset together
with digital assets we created ourselves, and it is driven mostly by
the data received from the Python code as users move and interact.
Simulated keyboard controls handle the wall activating and
deactivating as users draw near or move away, transitioning from a
blank white display to the galaxy interface. Other functionalities
have also been given backup mouse and keyboard controls in case of
technical difficulties with the intended physical interactions.
These include moving stars to follow the user, triggering the task
description popup, and triggering the ball release when a task is
accepted.
Arduino Uno and Servo Motor:
An Arduino Uno and a servo motor are used to automate dispensing the
balls that represent tasks. The servo motor acts as the ball release
mechanism, rotating to release a ball when a task is accepted. The
release is triggered when a user accepts a star through the
grab-and-drag interaction with the Unity interface, dragging the
star to the right side of the wall towards the dispensing mechanism.
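In the exhibit the release is triggered from Unity and the motor is driven by the Arduino; purely to illustrate the release motion in the same language as the rest of this report, the following hypothetical pyFirmata snippet sketches the idea (it assumes the Uno is running the StandardFirmata firmware, and the serial port, pin, and angles are assumptions):

import time
from pyfirmata import Arduino, SERVO

board = Arduino("/dev/ttyACM0")  # assumed serial port of the Uno
SERVO_PIN = 9                    # hypothetical pin for the release servo
board.digital[SERVO_PIN].mode = SERVO

def release_ball():
    # Swing the gate open long enough for one ball to drop, then close it
    board.digital[SERVO_PIN].write(90)  # open position
    time.sleep(0.5)
    board.digital[SERVO_PIN].write(0)   # closed position

release_ball()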
Final Statement
Exhibit Experience:
Our overall exhibit experience was positive. We encountered
technical difficulties at the beginning, when laptop battery issues
caused errors in the Arduino IDE, but these were quickly resolved.
We made some last-minute changes, such as covering the back of the
Galaxy Charter wall with a black sheet to make it look more
presentable, and we displayed visual cues for the gestures to
support our explanations and our marquee display. The
activate/deactivate part of the interaction, fading the UI in and
out, was dropped: since it was simulated it became tedious to
operate, and it neither added to nor detracted from the experience.
All in all, we believe the exhibit was a success, and the more
practice we got with our explanations, the more we improved. One
surprise was the effect of displaying our back-end Python-OpenCV
webcam capture of the face and gesture tracking. This was unplanned,
but because of our setup it was visible to the public on the laptop,
and the "behind the scenes" look it gave became a good way to engage
guests. Overall, though it was a long and exhausting day, it was a
great experience to finally showcase our work to the public and our
peers.
Public Response:
The exhibit received a great deal of positive feedback. Although
there were occasional minor technical and calibration issues, the
overall concept and experience were still successful and enjoyable
for visitors. Most visitors were fascinated by the hand gesture
interaction and were also interested in how it was implemented. The
dispensing mechanism was widely applauded as an addition that
complements the design and concept well. Some visitors found the
concept closely related to their own work environments, which led to
more in-depth discussion of potential applications in current
workplaces and future possibilities.
Next Steps:
Our next steps will be to improve the gesture interactions so that
they run more smoothly, since there were challenges when users did
not perform the gestures precisely enough for the webcam and
software to detect them. Additionally, the entire visual design will
be iterated to be both more aesthetically appealing and more
practical, with greater visual continuity between the main display
and the popup components. More interactions will be incorporated
into the design, such as multi-user interactivity and balls that
vary in size with task complexity, for example a larger ball for a
difficult task and a smaller ball for a simple one. Furthermore,
after accepting a task and receiving the ball, a user could return
to their workstation and connect the ball to a device that provides
the full project details. Finally, after finishing the task, users
should be able to return to the wall and interact with it to add a
completed-task star to a constellation of completed tasks that can
be seen and interacted with.