Everyday embodiments

Embodiment is a great way to increase focus, decrease stress, and stimulate your creativity. Embodiment is a path of awakening that views the body as the doorway, not the obstacle, to personal growth and spiritual transformation. By combining the orientation of Western somatic therapy with Eastern meditation practices, Embodiment Training leads you to the discovery of your natural, embodied state …

Well, you don’t have to be a new-age guru, nor a werewolf, to experience embodiment. In fact, it happens to you every day. Let me give you some examples.


“The mouse was just a tiny piece of a much larger project, aimed at augmenting human intellect.”  — Doug Engelbart, inventor of the mouse

You may have never thought about it, but one of the most common embodiments is your computer cursor. Yes, that little arrow that lets you interact with the digital reality of the computer’s 2D screen. Although it may appear that it is still your body that drives the interaction with the computer (through the negligible movements of your forearm and fingers), that is not true. If you want to enter the 2D world of screen space, you need to employ that pointy little guy who happens to live there.

Let’s take a more in-depth look at what’s going on here: obviously, our physical body is not capable of interacting directly with zeros and ones, nor with their graphical representations. However, when we map our hand, through the computer mouse, to the movements of the cursor, and let our eyes follow it on the display, we can. It is still our mind that drives the interaction, but it is embodied in the cursor, while the immediate feeling of our physical body temporarily disappears from the current reality and becomes obsolete. Just think about it for a moment: do you say “I move my finger” or “I click on the icon” when you want to interact with a computer?


"Cars are not a suit of clothes; cars are an avatar. Cars are an expansion of yourself: they take your thoughts, your ideas, your emotions, and they multiply it -- your anger, whatever. It's an avatar." — Chris Bangle: Great cars are great art

And how about this example: can you run at 130 km/h, carry 400 kg of luggage, and take four other people along with you? Apparently, our bodies can’t do that. Unless we use a car. Unless we embody into the car. Of course, one can say that a car is just a machine that we control with our hands on the steering wheel, our feet on the pedals, and our eyes on the road in front of us. Well, isn’t that the same description we used when we talked about the computer cursor? Are you deeply and consciously aware of your physical body when you drive? Instead, we feel the road we drive on, all the vibrations. We adapt our visual perception to the speed, and we hardly think about what our body does with the car. Our body is temporarily replaced by the body of the car, while our senses and locomotor apparatus are mapped to the modalities of the car.

“For real human beings, the only realism is an embodied realism.”
― George Lakoff

I could also use the example of riding a bicycle. Or skiing. In all these cases, we don’t consciously think about what our body parts do. Instead, we think of it as bike-riding, skiing, driving, and so on. Neuroscientist David Eagleman describes this phenomenon in his book “Incognito”: we learn to use our bodies as newborn babies, and we turn the interactions between our body and mind into autonomous “zombie” processes. The same thing happens later when we learn to ride a bicycle or drive a car. We only think about our physical body during the learning phase, until these skills become zombie processes.

In fact, we experience embodiment in many forms every day. These various embodiments allow our mind to experience realities beyond the capabilities of our physical body. The body, so to say, is a representation of reality. Although some philosophers may object that the body defines rather than reflects reality, that makes no difference in one specific case: virtual reality. As there are no rules or laws that pre-define it, our embodiments are both reflection and definition at the same time. If we consider that embodiment is a product of mapping new modalities onto our sensory and motor systems, and that we can freely define this reality, what would be the perfect VR embodiment?

Feel free to follow the conversation about this article on LinkedIn:

References and related reading

Walk it off


In 1985, GM made a decision that changed the way we design cars (and, in general, anything else): they signed a contract with Alias Systems to develop NURBS modeling technology compatible with their existing CAD tools. Just three years later, the software package called Alias/2 became a substantial part of the design process at most of the industry leaders, including brands such as Honda, Volvo, BMW, ILM, Apple, and Sony. Computer Aided Industrial Design was born, and since then, digital modeling and visualization have anchored themselves in the development of human-made products, alongside traditional model making with wood or clay.

It was natural to expect that digital modeling would replace clay modeling. Since nobody uses mechanical typewriters to write anymore (except a few pathetic hipsters, probably), it seemed inevitable. What is interesting, though, is that even after many attempts to fully digitize the design process, it has never entirely happened. Despite today’s democratized access to computers, despite all the high-resolution, room-sized screens, and lately VR, clay has never disappeared from the process. It seems that when it comes to full-size models, the choice is clear, and it is clay, not the computer screen.

There are a number of reasons why it would make sense to avoid clay modeling. First of all, it takes too much time to build a model manually, and even NC-milling the digital model out may not be as direct as it appears. Changes to sculpted surfaces are a relatively easy task, but they have to be scanned and re-surfaced with a CAID tool, as the rest of the process is digital. Additionally, working with industrial clay requires specific conditions, such as a well-ventilated room, since the clay may contain sulfur.

So, why are we still using clay? Is it just a pathetic choice made by prominent chief designers? Or is it the sheer joy of pushing the model out into the sunlight? I guess everyone who has ever experienced it would confirm the satisfaction of walking around the model outside the modeling hall, but that won’t serve as a convincing reason to spend so much time and money on clay modeling.

There is no clear answer to that. For example, it is tough to accept the opinion of Chris Svensson (the director of design for Ford’s North and South American operations), which he shared with the Wall Street Journal in 2014: “We always came back to clay.” The problem, he says, is that digital projections can’t accurately show how light will play on a car’s surface: “You can’t replicate the sun.” While it may sound just about right, it is far from the truth. Today’s digital tools are much more precise at controlling and evaluating highlights than anything we know in the analog world. As of today, we can simulate pretty much any lighting scenario and support the visual fidelity with physically correct shaders and materials. We can even present design models in virtual reality, where we see them at their real size and observe them from any angle as we turn the camera view in the space in front of us. And still, it fails to deliver enough stimuli to judge and evaluate forms accurately. Nevertheless, Svensson succeeded in pushing me in the right direction: if we can replicate the sun, what else do we need to replicate the “real” visual experience in a digital simulation?

It appears that we can generate digital content convincing enough to satisfy our eyes. Yet there is much more to visual perception than just eyesight. Neuroscientist Anil Seth reveals the point in his talk “Neuroscience of consciousness”: “What we consciously see is our brain’s best guess of the causes of its sensory inputs.” David Eagleman of Stanford University adds: “Our brain continually creates a visual model of the outside world refined by our eyesight and combined with proprioception.” On top of that, he also claims (in his book “Incognito”) that we don’t even see fully in 3D; instead, we calculate our three-dimensional mental image from the different viewing angles generated by the offset of our eyes, our head orientation, and our body movement in space. By the way, this theory also explains why some people with an injury to one eye are still capable of perceiving depth.

So what does this mean? And does it have anything to do with our case of clay modeling? In fact, we have a lot to consider! The thing is, whenever we walk around an observed object, we are adding information needed for better perception. As we tilt our heads, as we walk around it, we continuously improve our inner mental image with new viewing angles. At the same time, our brain uses all the senses of our own body, our height, the length of our arms, and our proprioception to refine our mental judgment of the size and proportions of the object. Our brain also compares the object with other objects around it, especially with objects of known size and proportions, such as human figures. All of these inputs improve the way we interpret the observed object. If any of them is missing from the current observation, the brain’s guess is incomplete.

We can certainly produce a hyper-realistic, highly detailed visualization of a digital object, and we can display it as a stereoscopic projection, but when we leave our physical body out of the experience, the visual perception is far from complete. At the same time, a clay model pushed out into the sunlight, despite all its imperfections, will provide much more information about itself than any virtual-reality immersion in which we can’t walk around the object and can’t use our body to complement our vision.

Do I suggest that we just need to implement a walking system in VR to achieve the required visual fidelity? I would say yes, but there is another path to the same goal. Our ability to correctly evaluate observed objects can be trained. The only problem is that it may take long years of a clay modeler’s or designer’s practice to earn the ability to see that skillfully. It is a skill everyone can learn, just as babies learn to recognize faces or understand colors. It may take years, though. So for the rest of us: we have to walk it off.

You can also follow the conversation about this article on LinkedIn:

References and related reading

VR tribute: Lebbeus Woods

Lebbeus Woods, one of my favorite architects, inspires me not only with his visual style and technique, but mainly with his unique opinion on the way we design and perceive buildings. Among other works, his manifesto “Radical Reconstruction” is a must-read for any concept designer and futurist. As it is not a long read, it shouldn’t be much of a challenge ;)

The following images were all rendered in Keyshot and edited in Photoshop. The models were made in VR using GravitySketchVR.


VR: Singularity

A VR artifact inspired by transhumanist theories.
A mind-machine interface at the neural level, visualized with audio-reactive brushes in Google’s Tiltbrush.
Download the TiltBrush file, pump up the music, and watch the neurons fire up!

Video for the album “Transhuman” by Zayaz

Screenshots from the Tiltbrush application in VR:

Neural part on Sketchfab.com:

VR tribute: SYD MEAD

Syd Mead (born July 18, 1933) is a “visual futurist” and neofuturistic concept artist. He is best known for his designs for science-fiction films such as Blade Runner, Aliens, and Tron. Of his work, Mead was once moved to comment: “I’ve called science fiction ‘reality ahead of schedule.’” (Source: Wikipedia)
Without a doubt, Syd has motivated a whole generation of designers, and his influence is far from fading even today. Along with Star Wars, his work was the reason most designers of my age dreamed of an art career. We all wanted to draw spaceships and robots.

I met Syd in Montreal during the ADAPT 2007 conference, where he gave an amazing talk about his career and his process, and where he also signed the event catalogue for me. Yes, I came to his booth too late to purchase a book; they had all sold out in the meantime. However, meeting my hero totally changed my life. Syd was not a myth anymore; he was a friendly, living person I could talk to. Suddenly, his inspiration went from “dream” to “goal”. By the way, I also attended Iain McCaig’s class that week, and it was exactly the same experience. And while their skills may remain out of my reach forever, they both hold the flag for me and illuminate my path.
Although Syd’s influence on my work is undeniable and visible, I have never attempted to copy or mimic his style. So when the article at Core77 appeared (http://www.core77.com/posts/20127/flotspotting-michael-jelineks-mead-esque-transportation-designs-20127), I was horrified. Did they just accuse me of plagiarism? Well, I must admit that it took me a while to realize that it was the best compliment I could ever earn. Just see for yourselves. Yes, I am still flattered.

Brushes, gouache, pens, and markers ... these are Syd’s weapons, and while they may appear too traditional or old school, his style is definitely current and futuristic. My tools, on the other hand, are digital and experimental. So here comes my desperate attempt to say thank you, Syd: to steal a bit of your style and draw it my way. In virtual reality.
Below are a few thumbnails and animated GIFs from the process, and at the bottom you will find the full 3d sketch hosted on Sketchfab, which you can view with your VR headset to experience the space. Enjoy and leave me a note!


Animated GIFs from the process, posted on Tumblr:


3d sketch on Sketchfab.com:

From Tiltbrush to 360 panorama


Tiltbrush by Google is an amazing artistic tool. Unfortunately, there is only one way to fully admire and enjoy the artwork: the viewer has to dive into the Tiltbrush application ... which somewhat limits your audience to HTC Vive owners only. Yes, there are ways to share your art via Sketchfab or through 360 stereoscopic video on YouTube, if you have enough bandwidth and CPU power. But what if you want to consume such art on a mobile device? Here is a little hack that will let you export your creations as 360 panoramas and share them, for example, on Facebook.

Setting up your scene and exporting the 360 video

Before we start, make sure you read the Tiltbrush release notes (link). The process of rendering 360 images and movies is not really for dummies and requires a bit of computer knowledge.

First of all, find a good spot in your Tiltbrush scene, set up the scale, and save your Tilt sketch. After I had done that, I renamed the file and copied it to a new folder at C:\TILT. (This is just to keep things simple and organized.)

In the next step, you should locate your TiltBrush.exe file. The best way to do so is to use the Steam application, where you can browse the files associated with Tiltbrush. Mine was located here: C:\"Program Files (x86)"\Steam\steamapps\common\"Tilt Brush".

Now open the Command Prompt and paste the following lines:

cd C:\"Program Files (x86)"\Steam\steamapps\common\"Tilt Brush"
(... or your own path)

and then:

TiltBrush.exe --captureOds --numFrames 10 C:\TILT\NoMoon.tilt
(... or replace the Tilt sketch name with your own file name and path)

Don’t forget to press Enter each time you paste a command ;)


Tiltbrush boots up and loads the sketch. After some time (depending on the number of frames), the window closes, and you will find your movie in Documents/Tilt Brush/VRVideos. Although we now have an mp4 VR video, we are not going to use it. We are interested in the files hidden in the folder named after our project.

A note on the number of frames: I chose 10 because I used some animated brushes and wanted to pick the best-looking frame. If you don’t want to wait, just change the number of frames to 1 (set --numFrames 1) before you start generating the movie. Your command line should then look something like this: TiltBrush.exe --captureOds --numFrames 1 C:\TILT\NoMoon.tilt
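
If you have several sketches to render, you can queue them up with a small script. Below is a minimal, hypothetical Python sketch (the paths are placeholders, and it relies only on the two flags shown above) that renders a single preview frame for every .tilt file in C:\TILT:

import glob
import subprocess

# Path to TiltBrush.exe; adjust to your own installation (assumption)
EXE = r"C:\Program Files (x86)\Steam\steamapps\common\Tilt Brush\TiltBrush.exe"

# Render one ODS frame per sketch, using only the flags documented above
for sketch in glob.glob(r"C:\TILT\*.tilt"):
    subprocess.run([EXE, "--captureOds", "--numFrames", "1", sketch])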




Crop and adjust

As Facebook does not support stereoscopic panoramas (yet?), we have to crop the selected frame down to a 2:1 ratio. Open Photoshop and create a new 4096x2048 image. Then simply paste (or drag and drop) the PNG file into it. If everything goes right, the frame should be perfectly aligned, and there is no need for any adjustments. Otherwise, position the frame carefully; newer versions of Photoshop should automatically snap it to the edges and the center. You can also crop the PNG manually, just make sure that the ratio is exactly 2:1.

Once you are done, feel free to adjust your colors and levels; you can even paint into it ;) Save it as a JPEG.
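
If you would rather script this step than use Photoshop, here is a minimal sketch using the Pillow library. The file names are placeholders, and it assumes the captured frame is an over/under stereo panorama, so keeping the top (single-eye) half yields a 2:1 equirectangular image:

from PIL import Image

# Open a frame exported by Tiltbrush (placeholder file name)
src = Image.open("frame_00000.png")
w, h = src.size

# Assumption: the capture is an over/under stereo pano, so the top half
# is a single-eye equirectangular image with a 2:1 aspect ratio
mono = src.crop((0, 0, w, w // 2))

# Match the 4096x2048 canvas used in the Photoshop route
mono = mono.resize((4096, 2048), Image.LANCZOS)

# Save as JPEG so the EXIF data can be injected in the next step
mono.convert("RGB").save("pano.jpg", quality=95)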





I am quite happy with the free online EXIF injector (http://www.thexifer.net/#exif-general), but feel free to use any alternative; there are tons of these tools available.

There are basically only two changes required to force Facebook to accept your image as a 360 panorama: just set Make to "RICOH" and Model to "RICOH THETA S".
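
If you prefer to stay offline, the same two tags can be written with a short Python script using the piexif library (a sketch; the file name is a placeholder):

import piexif

# Load whatever EXIF data the JPEG already carries (may be empty)
exif_dict = piexif.load("pano.jpg")

# The only two changes Facebook needs, per the note above
exif_dict["0th"][piexif.ImageIFD.Make] = b"RICOH"
exif_dict["0th"][piexif.ImageIFD.Model] = b"RICOH THETA S"

# Write the modified EXIF block back into the file in place
piexif.insert(piexif.dump(exif_dict), "pano.jpg")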

Once you are done, don’t forget to download your injected file. See the process below.


Testing and publishing

To preview your file locally, I use the free panorama viewer FSPViewer (http://www.fsoft.it/FSPViewer/download/). If you want to test it on Facebook before anyone else sees it, just set the privacy to “Only me” and upload your image as a regular photo. Facebook should recognize it as a 360 panorama.


Now you are ready to go! Don’t forget to share your creations with me and leave me a comment below!

VR tribute: Kamil Lhotak

In this episode of my VR Tributes, I would love to introduce you to Kamil Lhoták (25 July 1912, Prague – 22 October 1990, Prague), a Czech painter, graphic artist, and illustrator. He was one of the members of Group 42. He was especially fond of the works of Jules Verne, Edouard Riou, and Henri de Toulouse-Lautrec, and his early artworks were also influenced by modern technical inventions; he often created drawings of cars, motorcycles, and bicycles.

It was his collaboration with the writer Adolf Branald that made a big impression on me when I was a kid. Together they created the book Dědeček automobil (Grandfather Automobile). Lhoták later considered his illustrations for this book his best work. A year later, in 1955, they cooperated on the production of the film Dědeček automobil. (Wikipedia)

I loved (and still do) the mood of his artwork. Airplanes, cars, and motorcycles peacefully resting on the green grass of airfields, creating an incredible feeling of quietness and solitude. All those machines were adored and loved, and that greatly appealed to the little boy I still feel myself to be ;)

Screenshots from the Tiltbrush application in VR:


Animated GIFs from the process, posted on Tumblr:


3d sketch on Sketchfab.com:

360 panorama hosted on Facebook:

VR Tape drawing


The once-famous “Piano di Forma”, so loved by Giorgetto Giugiaro: this front/side/top-view design drawing works best at full scale. Creating such a drawing is one of the oldest automotive design techniques, yet it remains an essential part of the process at most car styling centers. We call it full-size tape drawing. Used for both exteriors and interiors, it allows designers to create a full-size drawing as black-and-white outlines, which gives them an accurate sense of the proportions. Such drawings are used further in the process for clay modeling and digital sculpting.

Image at the top: tape drawing at Bentley: https://www.youtube.com/watch?v=PuZJO2jGGe0

One of the advantages of tape drawing is the possibility to step away from the board, find another perspective, and let our brain evaluate the shapes and proportions from multiple points of view. Until now, such physical interaction with designed objects was nearly impossible when using computers. Today, thanks to VR (HTC Vive) and applications such as Gravity Sketch, we can successfully re-create this process digitally, with the added advantages of 3d CAD data output and an unlimited object size.

So here is the very first case study I have made in this direction in VR with Gravity Sketch. I took the technical package data (originally imported into Autodesk Alias) in OBJ format and imported it into the application. Although you have to rotate your model by 90 degrees around the X-axis, because Gravity Sketch uses a Y-up coordinate system, the process of tape drawing itself is a piece of cake. There are a few choices of tools, from freehand strokes to bezier-like splines, a few types of stroke shapes, symmetry, and some other sculpting and modeling tools. I ended up with a basic round curve built point by point with mirror symmetry on. With no need to use much of the UI, the “taping” is very intuitive and, yes, very enjoyable. As you get closer to the center line, points automatically snap to the mirror plane; when you select multiple points at once, you can rotate the group with a twist of your wrist; and you zoom in and out much as you would on an iPad, except you need both arms to do it. That makes the whole creative process even more physical. Which is a good thing.
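
As a side note, that 90-degree rotation around the X-axis is a one-line vertex transform: a Z-up point (x, y, z) becomes (x, z, -y) in a Y-up system. Here is a minimal, hypothetical Python sketch that applies it to the vertex lines of an OBJ file (the file names are placeholders; flip the signs if your model arrives upside down):

# Convert a Z-up OBJ to the Y-up system used by Gravity Sketch:
# rotate -90 degrees around X, i.e. (x, y, z) -> (x, z, -y)
with open("package_zup.obj") as src, open("package_yup.obj", "w") as dst:
    for line in src:
        if line.startswith("v "):  # vertex positions only
            x, y, z = map(float, line.split()[1:4])
            dst.write(f"v {x} {z} {-y}\n")
        else:  # faces, UVs, etc. pass through (normals would need the same rotation)
            dst.write(line)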

Learn more about Gravity Sketch at their website:


Using projection planes helped me create cross-sections (in red):


Obviously, it is pretty easy to bring a screenshot into Photoshop and sketch some shaded forms or details over it.


Short process video:

3d tape drawing at Sketchfab.com:

Cyber Haiku

Adventures of an Android Samurai, drawn in virtual reality using Google’s Tiltbrush app.
Although it serves primarily as an art-technique research project, it is also my tribute to my art heroes: Moebius and Miyazaki.

Chapter one:
Dragon Tower, Red Bridge, Charging Station,
Gone Fishing, Arzach’s Hat, The Dragon you’ve dueled

Chapter two:
Quest for the dragon’s egg
The Ferry To The Fairy, Hollow Moons

Movies and media



VR tribute: TRON

With Syd Mead on board, the TRON movie could not go wrong. Released in 1982, it not only proved that computers could be used for storytelling, it also secured their position among the traditional media. John Lasseter, head of Pixar and Disney’s animation group, described how the film helped him see the potential of computer-generated imagery in the production of animated films, stating that “without Tron there would be no Toy Story.”

The blue-and-white vector graphics, the design of the vehicles, the architecture ... freed from mechanical engineering, freed from the constraints of the real world ... they showed us that everything we dream of is possible.

TRON: Legacy, the sequel, continued to amaze, and with designers such as David Levy and Daniel Simon, it brought the original visual style to another level. The modern look did not harm the heritage; it only proved how well it ages. I can watch both movies over and over.

Last but not least, it is that subtle reference to Virtual Reality that secured TRON’s position in my VR tributes series.

Screenshots from the Tiltbrush application in VR:


Animated GIFs from the process, posted on Tumblr:


3d sketch on Sketchfab.com:

360 panorama hosted on Facebook:

VR tribute: Architecture drawings

Architecture drawings have always been a big inspiration to me. The combination of technical precision with a subtle watercolor atmosphere ... ;)
So here is my take on it, in Tiltbrush and Keyshot.