Don’t. Just don’t use UNDO. Never. It will cripple your skills; it degenerates humankind! Just think about it: every time you hit CTRL+Z, you admit that you’ve made a mistake, and you let the computer take it back for you, taking away your responsibility to avoid errors. That’s not the way you learn!
Everyone also knows that computers are remarkably counterproductive when it comes to real, authentic experience. Let’s take a look at another example: digital painting! Every time you zoom in, you lose control of the composition, you instantly focus too much on the details, and, most importantly, you forget the big picture you should be managing first of all.
The two statements you’ve just read are nothing more than shameless provocation, and they are far from correct. Yet not only are these preconceptions quite popular, they are also quite understandable. It does feel strange to find that your left hand subconsciously reaches for the keyboard when you write a wrong character or draw an ugly circle on paper. Naturally, it is not surprising that we feel guilty at that moment.
However, both are utterly wrong. The first is about the guilt we usually feel when making mistakes (and that the society we live in expects us to feel). Mistakes are mistakes, and there is nothing wrong with avoiding them, but it is the mindset that burdens them with negative emotion, with the fear of failure, that does the harm. The other case concerns the need to step away from a visual art piece or design we may have in the works. Of course, zooming is far from that (much needed) physical act of taking a few steps back, which makes our visual perception more complete (read more about it in the “Walk it off” article). On the other hand, it doesn’t force you to flee from the “big picture,” from controlling the composition. In fact, it allows us to dive profoundly into the details in a manner that has no equivalent in our “real” reality.
Here is the key message: “Undo” may help you correct your errors, but more importantly, it allows you to make them. Which brings me to the evidence: my practice requires a lot of visual communication, mainly in the form of sketches. Since I need to discuss and share ideas in visual form on a day-to-day basis, I have to be able to do it with a sound level of drawing skill. Although my education relied strictly on analog tools such as pencil and paper, which I still love and use, my ability to sketch improved by a giant leap with the introduction of digital media. It allowed me to practice one stroke over and over, as a young karate apprentice repeats the oi-tsuki punch ten thousand times until it is perfect. Stroke, undo. Stroke, undo … and again and again. Until it was perfect. Interestingly, once I was satisfied, my ability to make that one stroke (and many others I’ve learned this way) did not disappear when I got back on paper. Both hand and eye became trained to repeat them even outside digital reality.
David Chalmers, an Australian philosopher and cognitive scientist, claims that experiences in digital and “real” realities are equally valuable, and I agree with him. The same applies to the skills you earn in one reality and use in another. Digital reality, which exists only in our computers (of many forms and sizes), is defined, like every other reality, by a set of rules. For example, we have the laws of physics in our reality, while digital reality allows us to go back in time and redo our actions. Avoiding undo starts to sound silly in this context, doesn’t it?
So, next time, just keep calm and hit undo. It is not your failure; it is your chance to play more. To draw the way you like.
Although undo still may not work with your pencil and paper.
In 1985, GM made a decision that changed the way we design cars (and anything else in general): they signed a contract with Alias Systems to develop NURBS modeling technology compatible with their existing CAD tools. Just three years later, a piece of software called Alias/2 became a substantial part of the design process among most of the industry leaders, including brands such as Honda, Volvo, BMW, ILM, Apple, and Sony. Computer Aided Industrial Design was born, and since that day digital modeling and visualization have anchored themselves in the development of human-made products, alongside traditional model making with wood or clay.
It was natural to expect that digital modeling would replace clay modeling. Just as nobody writes on mechanical typewriters anymore (except perhaps a few pathetic hipsters), it seemed inevitable. What is interesting, though, is that even after many attempts to fully digitalize the design process, it has never entirely happened. Despite today’s democratized access to computers, despite all the high-resolution room-sized screens and, lately, VR, clay never disappeared from the process. It seems that when it comes to full-size models, the choice is clear, and it is clay, not the computer screen.
There are a number of reasons why it would make sense to avoid clay modeling. First of all, it takes a long time to build a model manually, and even NC-milling the digital model out may not be as direct as it appears. Changes to sculpted surfaces are a relatively easy task, but they have to be scanned and re-surfaced with a CAID tool, as the rest of the process is digital. Additionally, working with industrial clay requires specific conditions, such as a well-ventilated room, as the clay may contain sulfur.
So, why are we still using clay? Is it just a pathetic choice made by prominent chief designers? Or is it the joy of pushing the model out into the sunlight? I guess everyone who has ever experienced it would confirm the sheer satisfaction of walking around a model outside the modeling hall, but that won’t serve as a convincing reason to spend so much time and money on clay modeling.
There is no clear answer. For example, it is tough to accept the opinion of Chris Svensson (the director of design for Ford’s North and South American operations), which he shared with the Wall Street Journal in 2014: ‘We always came back to clay.’ The problem, he says, is that digital projections can’t accurately show how light will play on a car’s surface. ‘You can’t replicate the sun.’ While that may sound about right, it is far from the truth. Today’s digital tools are much more precise in controlling and evaluating highlights than anything we know in the analog world. As of today, we can simulate pretty much any lighting scenario and support the visual fidelity with physically correct shaders and materials. We can even present design models in virtual reality, where we see them at their real size and observe them from any angle as we turn the camera view in the space in front of us. Still, it fails to deliver enough stimuli to judge and evaluate forms accurately. Nevertheless, he succeeded in pushing me in the right direction: if we can replicate the sun, what else do we need to replicate the “real” visual experience with a digital simulation?
It appears that we can generate digital content convincing enough to satisfy our eyes. Yet there is much more to visual perception than eyesight alone. Neuroscientist Anil Seth reveals the truth in his talk “The neuroscience of consciousness”: “What we consciously see is our brain’s best guess of the causes of its sensory inputs.” David Eagleman of Stanford University adds: “Our brain continually creates a visual model of the outside world, refined by our eyesight and combined with proprioception.” On top of that, he also claims (in his book “Incognito”) that we don’t even see fully in 3D; instead, we calculate a three-dimensional mental image from the different viewing angles generated by the offset of our eyes, our head orientation, and our body movement in space. Incidentally, this theory also explains why some people with an injury to one eye are still capable of perceiving depth.
So what does this mean, and does it have anything to do with our case of clay modeling? In fact, we have a lot to consider! The thing is, whenever we walk around an observed object, we add information needed for better perception. As we tilt our heads and walk around it, we continuously improve our inner mental image with new viewing angles. At the same time, our brain uses all the senses of our own body, our height, the length of our arms, and our proprioception to refine its judgment of the object’s size and proportions. Our brain also compares the object with other objects around it, especially objects of known size and proportions, such as human figures. All of these inputs improve the way we interpret the observed object; if any of them is missing from the current observation, the brain’s guess is incomplete.
We can unmistakably produce hyper-realistic, highly detailed visualizations of a digital object, and we can display them as stereoscopic projections, but when we exclude our physical body from the experience, the visual perception is far from complete. At the same time, a clay model pushed out into the sunlight, despite all its imperfections, will provide much more information about itself than any virtual-reality immersion in which we can’t walk around the object and can’t use our body to complement our vision.
Do I suggest that we just need to implement a walking system in VR to achieve the required visual fidelity? I would say yes, but there is another path to the same goal. Our ability to correctly evaluate observed objects can be trained. The only problem is that it may take long years of a clay modeler’s or designer’s practice to learn to see that skillfully. It is a skill everyone can learn, just as babies learn to recognize faces or understand colors, although it may take years. So for the rest of us, we have to walk it off.