poseNet, machine learning, embodied interaction
Using poseNet, a pose estimation neural network, in a p5.js sketch, I divided an existing illustration into parts and mapped each part onto its matching position on my body, following the corresponding keypoints detected by the model (a minimal sketch of this mapping follows the questions below).
Can the body perform an illustration? In doing so, does embodied illustration become possible?
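A minimal sketch of this keypoint-to-illustration mapping, assuming the ml5.js poseNet wrapper alongside p5.js; the two fragments and their filenames are hypothetical stand-ins for the divided illustration, not the project's actual assets:

```javascript
// Minimal p5.js + ml5.js sketch: anchor illustration fragments to poseNet keypoints.
// Assumes ml5.js (0.x) is loaded with p5.js; image filenames are placeholders.
let video, pose;
let headImg, torsoImg;

function preload() {
  headImg = loadImage('fragment-head.png');   // hypothetical illustration fragment
  torsoImg = loadImage('fragment-torso.png'); // hypothetical illustration fragment
}

function setup() {
  createCanvas(640, 480);
  video = createCapture(VIDEO);
  video.size(width, height);
  video.hide();
  const poseNet = ml5.poseNet(video, () => console.log('poseNet ready'));
  poseNet.on('pose', (results) => {
    if (results.length > 0) pose = results[0].pose; // keep the most recent pose
  });
}

function draw() {
  image(video, 0, 0, width, height);
  if (!pose) return;
  imageMode(CENTER);
  // Each fragment is drawn at the keypoint it was assigned to.
  image(headImg, pose.nose.x, pose.nose.y);
  // The torso fragment sits at the midpoint between the shoulders.
  const tx = (pose.leftShoulder.x + pose.rightShoulder.x) / 2;
  const ty = (pose.leftShoulder.y + pose.rightShoulder.y) / 2;
  image(torsoImg, tx, ty);
}
```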
Further paths for this project include an installation where the viewer engages with the sketch and plays with it. Another pathway is to add pose classification on top of the estimated keypoints and to invite the viewer to mimic certain poses in correspondence with their matching illustrations; a sketch of this approach follows below.
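One way this second pathway could look, assuming the same poseNet setup as in the sketch above and ml5.js's neuralNetwork for classification; the labels, input size, and epoch count are illustrative assumptions rather than part of the original project:

```javascript
// Sketch of pose classification on top of poseNet keypoints, using ml5.neuralNetwork.
// Assumes `pose` is kept up to date as in the sketch above; labels are examples only.
const brain = ml5.neuralNetwork({
  inputs: 34,             // 17 poseNet keypoints x (x, y)
  outputs: 2,             // e.g. 'mountain' vs 'other'
  task: 'classification',
});

// Flatten a pose into the input vector the classifier expects.
function flattenPose(p) {
  const inputs = [];
  for (const kp of p.keypoints) inputs.push(kp.position.x, kp.position.y);
  return inputs;
}

// Call repeatedly while holding a pose to collect labelled examples.
function addExample(label) {
  if (pose) brain.addData(flattenPose(pose), [label]);
}

// Train once enough examples are collected, then classify the live pose.
function trainAndClassify() {
  brain.normalizeData();
  brain.train({ epochs: 50 }, () => {
    brain.classify(flattenPose(pose), (error, results) => {
      if (!error) console.log(results[0].label, results[0].confidence);
    });
  });
}
```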
This project was inspired by the blurring boundaries between the physical and the digital that have emerged with interactive arts technologies, and I intend to take it further through AR applications.
How do I become a mountain with a static medium?
How do illustrations become a mountain?
How does the machine become a mountain?
How do I become a mountain with embodied interaction?
poseNet in a p5.js sketch
Designed as a follow-up to the series of illustrations titled 'Becoming a Mountain', 'Embodied Illustration' focuses on the performativity of visual media through neural networks and real-time interaction.