Kids these days just sit on the internet, and they have no clue what the real world is.
Or, for those who grew up before Facebook and YouTube: You’ve been watching too much TV. You’ve been reading too many books, too many graphic novels.
Now go outside and get some real life.
We’ve heard that so many times. All of us. And not just from our parents. Hundreds of years ago, when books became widely available, some parents were undoubtedly complaining about the amount of time their kids spent reading. Less than half a century ago, it was TV that was stealing the precious reality time and replacing it with worthless daydreaming. It was the PlayStation twenty years ago, it is YouTube and mobile gaming today, and we will blame virtual reality tomorrow.
Apparently, it is right for you to be outside, experiencing the “real” life. To dream, read, watch, listen, to be inside, on the other hand, not so much. While no one would complain about the benefits of a day spent on the beach (or on snowy peaks, in my case), the time spent reading, or consuming media in general, often receives a negative mark. We are told that “real world” experiences are good, and that those others are bad or worth much less. I must admit that reading has lost some of its negative connotations recently, but it remains in this “bad” zone.
What’s so wrong with such a categorization?
Let’s talk about what is real, for a start. Neuroscientist Anil Seth claims that we’re all hallucinating all the time; and once we agree about our hallucinations, we call it “reality.” Our brain continuously creates a model of the outside world and checks it against the inputs from our sensory system, including the signals coming from interoception and the vestibular system. Interestingly, our brain doesn’t stop doing this even when we sleep, and it keeps delivering the “hallucination” we perceive as real once we wake up. Another take on the same theory is a popular implementation of the Bayesian Brain hypothesis, according to which the brain maintains predictive models of the external causes of sensory inputs and updates these models according to some version of Bayes’ theorem.
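The Bayesian update at the heart of that hypothesis fits in a few lines. This is a toy illustration of the math, with invented numbers (a prior belief about rain, revised after a "sensory input" that the ground looks wet), not a model of an actual brain:

```python
# Toy Bayesian update: a belief is revised after new sensory evidence arrives.
def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(H|E) via Bayes' theorem for a binary hypothesis H."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

prior_rain = 0.2            # prior belief: P(rain)
p_wet_given_rain = 0.9      # P(ground looks wet | rain)
p_wet_given_dry = 0.1       # P(ground looks wet | no rain)

posterior = bayes_update(prior_rain, p_wet_given_rain, p_wet_given_dry)
print(round(posterior, 3))  # belief in rain jumps after seeing a wet ground
```

The point is the loop, not the numbers: each new sensory input becomes evidence, the posterior becomes the next prior, and the model of the world keeps getting refined.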
So, reality is an interpretation of the outside world, made in our brain. It is our best guess, and we need our body, with all of its sensors and limbs, to help our mind build such a model. All of the sensory inputs and interactions contribute to it. With that said, why should reading, an essentially synesthetic process that often combines the senses of hearing and sight, be excluded from the valid forms of experience? What makes this experience less real? One may say it is not real because it is based on content that is the result of someone else’s mind’s hallucination of the real world. Does that mean someone else’s “personal hallucinations” of the outside world are less valuable? Maybe. But they are still part of our external world, we perceive them with our senses, and that qualifies them as an experience.
David Chalmers (philosopher and Professor of Philosophy and Neural Science at New York University) says “Virtual reality is not a second-class reality,” and I join him in this statement. He describes avatars as virtual bodies, playing the same role in digital reality as our bodies do in “real” reality. Just as the body is the source of much value in life, the virtual one is in digital reality.
We often say that we live the life of, that we become for that moment, a book hero or a game character. We perceive their fictional reality through their fictional bodies. But how can that even happen? We can’t observe their reality with our senses; we are not physically there! The answer is pretty simple, yet surprising. We can perceive their world when we embody the character’s body. Although their feelings exist only in description, or are visualized in the case of a movie or game, we can still map them into our mind and make them contribute to our reality model, to our hallucination.
As it turns out, we need a body capable of sensory inputs and interactions to support our brain in creating that model of the outside world. This body represents an interface with the outside world. Now, let’s think about embodiment: we can naturally map external senses or motion systems and use them as if they were part of, or even a replacement for, our current body. Such an outer body doesn’t have to be physical, as it turns out. We routinely collect data from fictional or digital bodies, and they do contribute to our perception of reality.
It doesn’t matter where we get our experiences. We are just a brain in a jar, and the body is an interface with the outside world.
There is something extraordinary here: We don’t need to drill into the brain with stainless-steel probes to add extra senses. We can simply map them to existing ones. David Eagleman has experimented with echolocation sensors assigned to the muscles around the spine of a blind person. He used a vest that translated sonar pulses into vibrations, which allowed the patient to read this input as sight. It took some time to learn the skill, but patients reported that they perceived those little massages of their back as SIGHT.
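The core idea of such sensory substitution, translating one modality into another, can be sketched very simply. The mapping below (nearer obstacle, stronger vibration) is my own invented illustration of the principle, not the encoding used in Eagleman's actual vest:

```python
# Toy sensory substitution: map sonar distance readings (meters) to
# vibration-motor intensities (0.0 = off, 1.0 = strongest).
# Nearer obstacles vibrate harder, so the wearer can "read" distance
# through their skin. The linear mapping is an assumption for this sketch.
def distances_to_vibrations(distances, max_range=5.0):
    intensities = []
    for d in distances:
        d = min(max(d, 0.0), max_range)           # clamp to sensor range
        intensities.append(1.0 - d / max_range)   # closer -> stronger
    return intensities

readings = [0.5, 2.5, 5.0]  # three hypothetical sonar sensors on the vest
print(distances_to_vibrations(readings))
```

The hardware does the easy half; the remarkable half is that, with practice, the brain learns to interpret those vibration patterns as a spatial sense.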
I’ve mentioned earlier that senses and interactions don’t have to be physical: we are able to map even digital bodies, with their interactions and sensory systems, and interact this way with digital realities. The same applies to fictional bodies, where the senses and interactions are “just” described or visualized.
Next time you are sent out to experience the real world, you can argue that you already do. That you always do.
Feel free to follow the conversation about this article on LinkedIn.
Well, you don’t have to be a new-age guru, nor a werewolf, to experience embodiment. In fact, it happens to you every day. Let me give you some examples.
“The mouse was just a tiny piece of a much larger project, aimed at augmenting human intellect.” — Doug Engelbart, inventor of the mouse
You may have never thought about it, but one of the most common embodiments is your computer cursor. Yes, that little arrow which allows you to interact with the digital reality of the computer’s 2D screen. Although it may appear that it is still your body that drives the interaction with the computer (through the negligible moves of your forearm and fingers), it is not. If you want to enter the 2D world of the screen space, you need to utilize that pointy guy who happens to live there.
Let’s take a more in-depth look at what’s going on there: Obviously, our physical body is not capable of direct interaction with zeros and ones, nor with their graphical representations. However, when we map our hand, through the computer mouse, to the moves of the cursor, and our eyes to following it on the display, we can do so. It is still our mind that drives it, but it is embodied in the cursor, while the immediate feeling of our physical body temporarily disappears from the current reality and fades into the background. Just think about it for a moment: Do you say “I move my finger” or “I click the icon” when you want to interact with a computer?
"Cars are not a suit of clothes; cars are an avatar. Cars are an expansion of yourself: they take your thoughts, your ideas, your emotions, and they multiply it -- your anger, whatever. It's an avatar." — Chris Bangle: Great cars are great art
And how about this example: Can you run at 130 km/h, carry 400 kg of bags, and take another four persons along with you? Apparently, our bodies can’t do that. Unless we use a car. Unless we embody the car. Of course, one can say that a car is just a machine that we control with our hands on the steering wheel, our feet on the pedals, and our eyes looking at the road in front of us. Well, isn’t that the same description we used when we talked about the computer cursor? Are you deeply and consciously aware of your physical body when you drive? Instead, we feel the road we drive on, all the vibrations. We accommodate our visual perception to the speed, and we hardly think about what our body does with the car. Our body is temporarily replaced with the body of the car, while our senses and locomotion apparatus are mapped to the modalities of the car.
“For real human beings, the only realism is an embodied realism.”
― George Lakoff
I could also use the example of riding a bicycle. Or skiing. In all these cases, we don’t consciously think about what our body parts do. Instead, we think of it as bike-riding, skiing, driving, and so on. Neuroscientist David Eagleman describes this phenomenon in his book “Incognito”: we learn to use our body as newborn babies, and we turn the interactions between our body and mind into autonomous “zombie” processes. The same thing happens later when we learn to ride a bicycle or drive a car. We only think about our physical body during the learning phase, until these skills become zombie processes.
In fact, we experience embodiment in many forms every day. These various embodiments allow our mind to experience realities beyond the capabilities of our physical body. The body, so to say, is a representation of the reality. Although some philosophers may object that the body rather defines than reflects reality, that distinction loses importance in one specific case: virtual reality. As there are no rules, nor laws, that pre-define it, our embodiments are both reflection and definition at the same time. If we consider that embodiment is a product of mapping new modalities to our sensory and motion systems, and that we can freely define this reality, what would be the perfect VR embodiment?
Feel free to follow the conversation about this article on LinkedIn.
Don’t. Just don’t use UNDO. Never. It will cripple your skills; it degenerates humankind! Just think about it: Every time you hit CTRL+Z, you admit that you’ve made a mistake, and you let the computer take it back for you, taking away your responsibility to avoid errors. That’s not the way you learn!
Everyone also knows that computers are remarkably counterproductive when it comes to the right and real experience. Let’s take a look at another example: digital painting! Every time you zoom in, you lose control over the composition, you instantly focus too much on the details, and, most importantly, you forget the big picture you should manage first of all.
The two statements you’ve just read are nothing more than shameless provocation, and they are far from correct. Yet those preconceptions are not only quite popular, they are also quite understandable. It does feel strange to find out that your left hand subconsciously looks for the keyboard when you write a wrong character or draw an ugly circle on paper. Naturally, it is not surprising that we feel guilty at that moment.
However, both statements are utterly wrong. First of all, it is guilt we usually feel (and the society we live in expects us to feel this way) when making mistakes. Mistakes are mistakes, and there is nothing wrong with avoiding them, but it is the mindset that accompanies them with negative emotion, with the fear of failure. The other case refers to the need to step away from a visual art piece or design we may have in the works. Of course, zooming is far from that (much needed) physical act of a few steps that makes our visual perception more complete (read more about it in the “Walk it off” article). On the other hand, it doesn’t force you to flee from the “big picture,” from controlling the composition. In fact, it allows us to dive profoundly into the details in a manner that has no reference in our “real” reality.
Here is the key message: Undo may help you correct your errors, but more importantly, it allows you to make them. Which brings me to the evidence: My practice requires a lot of visual communication, mainly in the form of sketches. And as I need to discuss and share ideas in visual form on a day-to-day basis, I have to be able to do it with a sound level of drawing skill. Although my education relied strictly on analog tools such as pencil and paper, which I still love and use, my ability to sketch improved by a giant leap with the introduction of digital media. It allowed me to practice one stroke over and over, as a young karate apprentice would repeat the oi-tsuki punch ten thousand times until it was perfect. Stroke, undo. Stroke, undo … and again and again. Until it was perfect. Interestingly, once I was satisfied, my ability to make that one stroke (and many others I’ve learned this way) did not disappear when I got back on paper. The hand and eye were both trained to repeat them even outside the digital reality.
David Chalmers, an Australian philosopher and cognitive scientist, claims that experiences in digital and “real” realities are equally valuable, and I agree with him. The same applies to the skills you earn in one reality and utilize in another. Digital reality, which exists only in our computers (of many forms and sizes), is defined, like all other realities, by a set of rules. For example, we have the laws of physics in our reality, while digital reality allows us to go back in time and redo our actions. Avoiding undo starts to sound silly in this context, doesn’t it?
So, next time, just keep calm and hit undo. It is not your failure; it is your chance to play more. To draw the way you like.
Though undo still may not work with your pencil and paper.
You can also follow the conversation about this article on LinkedIn.
In 1985, GM made a decision that changed the way we design cars (and anything else, in general): they signed a contract with Alias Systems to develop NURBS modeling technology compatible with their existing CAD tools. Just three years later, the software piece called Alias/2 became a substantial part of the design process among most of the industry leaders, including brands such as Honda, Volvo, BMW, ILM, Apple, and Sony. Computer Aided Industrial Design was born, and since that day, digital modeling and visualization have anchored themselves in the development of human-made products, alongside traditional model making with wood or clay.
It was natural to expect that digital modeling would replace clay modeling. As nobody uses mechanical typewriters to write anymore, except perhaps a few pathetic hipsters, it seemed inevitable. What is interesting, though, is that even after many attempts to fully digitize the design process, it has never entirely happened. Despite today’s democratized access to computers, despite all the high-resolution room-sized screens, and lately VR, clay never disappeared from the process. It seems that when it comes to full-size models, the choice is clear, and it is the clay, not the computer screen.
There are a number of reasons why it would make sense to avoid clay modeling. First of all, it takes too much time to build a model manually, and even NC-milling the digital model out might not be as direct as it appears. Changes on sculpted surfaces are a relatively easy task, but they have to be scanned and re-surfaced with a CAID tool, as the rest of the process is digital. Additionally, working with industrial clay requires specific conditions, such as a well-ventilated room, as the clay may contain sulfur.
So, why are we still using clay? Is it just a pathetic choice made by prominent chief designers? Or is it that singular joy of pushing the model out into the sunlight? I guess everyone who has ever experienced it would confirm the sheer satisfaction of walking around the model outside the modeling hall, but that won’t serve as a convincing reason to spend so much time and money on clay modeling.
There is no clear answer to that. For example, it is tough to accept the opinion of Chris Svensson (the director of design for Ford’s North and South American operations), which he shared with the Wall Street Journal in 2014: ‘We always came back to clay.’ The problem is, he says, digital projections can’t accurately show how light will play on a car’s surface. ‘You can’t replicate the sun.’ While it may sound just about right, it is far from the truth. Today’s digital tools are much more precise in controlling and evaluating highlights than anything we know in the analog world. As of today, we can simulate pretty much any lighting scenario and support the visual fidelity with physically correct shaders and materials. We can even present design models in virtual reality, where we can see them at their real size and observe them from any angle as we turn the camera view in the space in front of us. Still, it fails to deliver enough stimuli to judge and evaluate the forms accurately. Nevertheless, he succeeded in pushing me in the right direction. If we can replicate the sun, what else do we need to replicate the “real” visual experience with a digital simulation?
It appears that we can generate digital content that is convincing enough to satisfy our eyes. Yet there is much more to visual perception than just our eyesight. Neuroscientist Anil Seth reveals the truth in his speech “Neuroscience of consciousness”: “What we consciously see is our brain’s best guess of the causes of its sensory inputs.” David Eagleman of Stanford University adds: “Our brain continually creates a visual model of the outside world refined by our eyesight and combined with proprioception.” Add to that, he also claims (in his book “Incognito”) that we don’t even see fully in 3D; instead, we calculate our three-dimensional mental image using the different viewing angles generated by the offset of our eyes, our head orientation, and our body’s movement in space. By the way, this theory also explains why some people with an injury to one eye are still capable of perceiving depth.
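The "offset of our eyes" idea is the same principle a stereo camera uses: depth can be triangulated from the disparity between two views, z = f * B / d. This is a textbook sketch of that geometry, not Eagleman's model; the focal length and baseline values below are invented for the example:

```python
# Depth from binocular disparity: z = f * B / d
#   f: focal length in pixels
#   B: baseline, the distance between the two "eyes" (meters)
#   d: disparity, how far the same point shifts between the two images (pixels)
def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("zero disparity means the point is at infinity")
    return f_px * baseline_m / disparity_px

# Invented example: 700 px focal length, 6.5 cm baseline (roughly eye spacing).
print(depth_from_disparity(700, 0.065, 91.0))  # near object: large disparity
print(depth_from_disparity(700, 0.065, 9.1))   # far object: small disparity
```

Note how disparity shrinks with distance, which is why a single viewpoint, or a flat screen, gives the brain so little to work with, and why moving the head (adding more viewpoints) helps so much.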
So what does it mean? And does it have anything to do with our case of clay modeling? In fact, we have a lot to consider! The thing is, whenever we walk around the observed object, we are adding information needed for better perception. As we tilt our heads, as we walk around it, we continuously improve our inner mental image with new viewing angles. At the same time, our brain uses all the senses of our own body, our height, the length of our arms, and our proprioception to refine our mental judgment of the size and proportions of the object. Our brain also compares the object with other objects around it, especially objects of known size and proportions, such as human figures. All of the above-mentioned inputs improve the way we interpret the observed object. If any of them is missing from the current observation, the brain’s guess is incomplete.
We can undoubtedly produce a hyper-realistic, highly detailed digital visualization of a digital object, and we can display it as a stereoscopic projection, but when we leave our physical body out of the experience, the visual perception is far from complete. At the same time, a clay model pushed out into the sunlight, despite all its imperfections, will provide much more information about itself than any virtual reality immersion in which we can’t walk around it, in which we can’t use our body to complement our vision.
Do I suggest that we just need to implement a walking system into VR to achieve the required visual fidelity? I would say yes, but there is another path to the same goal. Our ability to correctly evaluate observed objects can be trained. The only problem is that it may take long years of a clay modeler’s or designer’s practice to earn the ability to see that skillfully. It is a skill that everyone can learn, as babies learn to recognize faces or to understand colors. Although it may take years. So, for the rest of us, we have to walk it off.
You can also follow the conversation about this article on LinkedIn.
The once-famous “piano di forma,” so loved by Giorgetto Giugiaro, this front/side/top-view design drawing is best at real scale. Creating such a drawing is one of the oldest automotive design techniques, yet it remains an essential part of the process at most car styling centers. We call it full-size tape drawing. Used for both exteriors and interiors, it allows the designer to create a full-size drawing as black-and-white outlines, which gives them an accurate sense of the proportions. Such drawings are used further in the process for clay modeling and digital sculpting.
Image on the top: The tape drawing at Bentley: https://www.youtube.com/watch?v=PuZJO2jGGe0
One of the advantages of tape drawing is the possibility to step away from the board, find another perspective, and let our brain evaluate the shapes and proportions from multiple points of view. Until now, such physical interaction with designed objects was nearly impossible when using computers. Today, thanks to VR (HTC Vive) and applications such as Gravity Sketch, we can successfully re-create this process digitally. Additional advantages are the output of 3D CAD data and the unlimited size of objects.
So here is the very first case study in this direction that I’ve made in VR with Gravity Sketch. I took the technical package data (originally imported into Autodesk Alias) in the OBJ format and imported it into the application. Although you have to rotate your model by 90 degrees around the X-axis, because Gravity Sketch uses a Y-up coordinate system, the process of tape drawing itself is a piece of cake. There are a few choices of tools, from freehand strokes to bezier-like splines, a few types of stroke shapes, symmetry, and some other sculpting and modeling tools. I ended up with a basic round curve built point by point with mirror symmetry on. With no need to use much of the UI, the “taping” is very intuitive, and yes, very enjoyable. As you get closer to the center line, points automatically snap to the mirror plane; when you select multiple points at once, you can rotate the group with a twist of your wrist; and you zoom in and out in a manner similar to your iPad, but you need both arms to do it. And that makes the whole creative process even more physical. Which is a good thing.
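That 90-degree rotation around the X-axis is a simple coordinate swap: a Z-up point (x, y, z) becomes a Y-up point (x, z, -y). A minimal sketch of what the conversion does to OBJ vertex lines (this script is my own illustration of the math, not part of Gravity Sketch or Alias; a full converter would also rotate the `vn` normal lines):

```python
# Convert OBJ vertex lines from a Z-up to a Y-up coordinate system
# by rotating -90 degrees around the X-axis: (x, y, z) -> (x, z, -y).
def zup_to_yup(obj_lines):
    out = []
    for line in obj_lines:
        parts = line.split()
        if parts and parts[0] == "v":        # vertex line: "v x y z"
            x, y, z = map(float, parts[1:4])
            out.append(f"v {x} {z} {-y}")
        else:
            out.append(line)                 # faces etc. pass through unchanged
    return out

print(zup_to_yup(["v 1.0 2.0 3.0", "f 1 2 3"]))  # -> ['v 1.0 3.0 -2.0', 'f 1 2 3']
```

In practice, most DCC tools expose this as an "up axis" option on export, so you rarely need to touch the file by hand.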
Learn more about Gravity Sketch at their website: https://www.gravitysketch.com/vr/
Using the projection planes helped me to create cross-sections (in red):
Obviously, it is pretty easy to bring the screenshot to Photoshop and sketch over some shaded forms or details.
Short process video:
3d tape drawing at Sketchfab.com:
Thanks to Photoshop and ZBrush, photobashing, and specifically kitbashing, became a natural part of the creative process. But what if we go beyond using functional components in a new and unusual context and dive deeper into the design process?
So here is the question: What makes Lamborghini or Apple products look like they do? Why are we able to instantly recognize which brand we are looking at? Well, it is because of the form language they use. The brand is represented by its look.
What is form language?
Simply said, it is a specific and unique combination of forms and shapes. It is the well-defined contrast between soft and hard forms, proportions, or structure that makes your design unique. Just take a look at the animal world: a fish has a totally different form than, let’s say, a tiger. Their bodies represent the environment and the way they live and act, in a specific form language.
Now that we finally understand, let’s distill the essence of a particular form language into simple building blocks. Let’s design the DNA of our form language with no constraints of function or manufacturability. Simply said, let’s build abstract sculptures which represent our vision. Once we are done, we can use those DNA blocks to build much more complex objects while maintaining a consistent visual style.
Finally, what is Formbashing?
Formbashing is a creative design strategy which uses simplified abstract building blocks to compose complex objects such as products, vehicles, or architecture. These basic forms are built first as abstract sculptures, and then applied as functional elements.
ABSTRACT -> CONCRETE -> SPECIFIC -> FUNCTIONAL.
See more examples at Behance: https://www.behance.net/gallery/27156741/FORMBASHING-Method