Touka 3D Scan

by: Volkan Eke


Using the 3D scanner felt fragile and suspect, and demanded extreme care. I started by pointing the camera at the Touka figure, which I placed first on a stool. The app communicates instructions and warnings as soon as the user strays off the path. At first, the stool stood between two desks, leaving me very little room to circle around the object. Then I put the figure on a desk, but this time I had to make a wide arc around it, and the app constantly told me to get closer, which was impossible given how big the desk was. After a few tries in these two spots, I finally took the stool out into the open with lots of empty space around it, under the curious glances of others who shared the making space with me that day, put the figure on it, and circled around it as if I were doing a photoshoot. It was a strange mix of technical precision (aligning the view on the surface of the iPad with a mental projection of the wide circle and angle of view I thought I had to maintain) and a careful but slow dance around a sitting object, all while trying to satisfy the perpetual warnings and demands of an application.

This made me wonder about the role of mastery given to the user over objects and materials, and how technological items recruit that mastery into standardized ways of being operated. It was also quite striking to experience the extremely physical nature of the exercise: moving while bent over, trying to keep my hands steady because the slightest vibration could set the camera angle off, constantly stopping to reset or to find again the original mental line I was trying to maintain in completing a full circle. There were times when the app told me it had captured the full 100%, 360-degree view of the item, except I knew that I hadn't yet completed the full circle, so I kept moving, not knowing whether I should stop or not, relying on nothing but guesses. Conversely, I also made several attempts where I had to stop before it reached 100%, and I felt like I had breached the trust the app placed in me. This begs questions about the tacit again: how much can the instructions written on the surface of the app, no matter how painfully detailed or deceptively simple, communicate about how physically taxing the task is, how much of it rests on intuition, or on negotiating one's intuition with the standardized protocols of a third-party app and a general-purpose device that wasn't designed with this specific task in mind?

Overall, the 3D scans I managed to get were quite, how should I say, unreflective of the physical shape of the figure. They looked more like nightmarish mismatches between the object and how you would remember it after not seeing it for years, or like what I can only imagine to be one of the many sad castoffs a reality constructor who doesn't want to talk about their past made long ago in their early attempts at creating something. It was so unsuccessful that I became quite convinced that 3D scanning this figure was either beyond my capabilities or that I was doing something fundamentally wrong. Tugba, the person on point for the lab that day, could not really help me with that. But she did say that this is usually how a lot of people walk out of the experience.

Once the circling-scanning-dance is over, I can see that a new object appears in the list of scanned objects on the app, shown as a thumbnail photo of the actual object. But once I tap on it to see what options I'm given for interacting with the scan, I'm told I can either preview it or send it to my email. I tap preview and the app shows me the scan in a 3D environment (meaning I can use my fingers to turn it around and see it from every angle). It looks absolutely weird. The overall shape is reminiscent of the physical object, by some remote description perhaps. But as I turn it around, I see that it looks more like someone chewed the object and spat it out: strange dents, washed-out features, an overstretched texture, and so on, resulting in a strange amalgamation of the object and the background. If I instead choose to send it to my email, I'm asked to choose a file type. I quickly consult Google, because I want to use the 3D scan in Unity and need to know which file type would work well with it. Google says go with 'Obj'. I go with 'Obj', in addition to the other two options (Ply and Wtl). You never know when something's going to break. Besides, it's not like this is my own iPad anyway, so better be prepared.
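For readers curious about what the 'Obj' export actually is: an OBJ file is plain text, with one record per line (vertices start with `v`, faces with `f`), so it can be inspected before importing into Unity. The sketch below is purely illustrative; the helper name and the toy one-triangle data are mine, not the app's actual output.

```python
def obj_stats(lines):
    """Count vertex ('v') and face ('f') records in OBJ text lines."""
    verts = sum(1 for line in lines if line.startswith("v "))
    faces = sum(1 for line in lines if line.startswith("f "))
    return verts, faces

# A toy OBJ fragment (a single triangle) standing in for a real scan export:
sample = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
f 1 2 3
""".splitlines()

print(obj_stats(sample))  # prints (3, 1): three vertices, one face
```

A real scan would list thousands of such records, which is one way to see, in numbers, how the app reduces a circled-around object to a mesh.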


As I reflect on the practice, the first thing that sticks out to me is the deceptively simple, minimalist design of these devices (the tablet, the camera, and the app too) against the actually highly physical and messy practice of the movements one makes. And it doesn't stop there. I constantly had to picture in my mind's eye where I should take my next step and how I should move so as not to disturb the camera's operation, nor the fragile balance of the object. The first thing I was told when I picked up the device was that the object has to stand up straight on a level surface (the second being that I have to circle around it). Nowhere in the DIY videos I'd seen is this really emphasized, mostly because those highly edited videos are meant to give a sense of the process rather than its technical nitty-gritty. I had to improvise with a plastic stand I had with me to make the object stand suspended in mid-air. Unexpectedly, the camera captured the plastic stand along with the figure, and their surfaces got blended together in the previews I got.


I remember sweating a lot trying to keep my balance for two hours straight while doing many scans. I never succeeded in capturing the object as I see it with my own eyes, though I'm not sure whether that comes down to my competence. I don't know if the results would have been different had I spent a couple more hours, or used a different app or a different device. At the time, I was told it was mostly because the object is too tiny, and I was content to take that explanation at face value, because I was mostly fascinated by how much of one's body is involved in using this technology. And this is a very tacit aspect of it: not even DIY videos documenting the physical character of 3D scanning could really convey the experience as it is, since one is simply watching rather than doing it oneself. So, abandoning thinking in terms of competence in favor of something else, I started thinking about the many craft processes we have seen in class so far, like paper-making for instance. Those are described as physically intensive processes. Then, thinking about how one can communicate this aspect of the process, I remembered Collins' call to focus on what remains fixed from one instance to another. While these technologies are advertised as facilitating, as offering solutions, they still entail physically intensive methods. Especially considering that I came up with the 3D scanning method as a solution to shaping polymer clay by hand to make it look like the figure, this was startling: it was almost as much a handcraft as using clay. If you have shaky hands, you will have equal trouble with both. I had already started considering going back to the clay method. That way, I would at least not have to rely on designed points of entry into interacting with the substance (such as previewing or emailing).


The other thing that kept jumping out at me was that, as I would occasionally stop to take notes, I realized I could not stop using the 3D design terminology I am already used to thinking with (texture, shape, objects, polygons, pixels, etc.). As I kept circling around the figure with the scanner/tablet in hand, I could see the figure being covered gradually with some white… something, that I can only describe as texture. The problem is that I took the notes not just for myself, but with an audience in mind (hello, readers). But the mental struggle I had was not just with my imagined audience. Since I have prior knowledge of 3D modeling, texture has a very specific referent in my mind, and what I was seeing was not that. So there I was, pausing awkwardly, trying to figure out how to refer to this thing I didn't have a proper word for, that I didn't know until then I didn't have a proper word for, only to end up taking photos instead.

Questions that arise

There is an undeniable difficulty that arises from using words to describe the process. Whether it's because words may carry different meanings in my audience's minds is secondary. What's primary is that I'm having an inner debate with myself through my projected use of words, as I'm thinking of how to communicate what I just experienced through some form of documentation. This made me wonder whether documenting, which is essentially a descriptive task, is problematic in the context of the average user of these technologies, and whether reflective processes in general pose an impediment to the task of doing, or whether they are always part of the practice. If I had designed the scanner and the app myself, I probably would not have had this type of inner debate. Which also brings into question the artificial boundaries between the designer and the user, and how that boundary shapes the way one comes to reflect upon one's own practice.
