Interview with Design, CAD and 3D Scan/Print Luminary Arthur Young-Spivey
Arthur is one of those names you’ve seen around the CAD world for a long time. He’s an industrial designer, so our paths have been parallel-ish in many ways. We were often interested in the same sorts of things – pushing the limits of the software in the curvy-shapes direction.
I recently decided to do an interview with Arthur, like I’ve interviewed several other folks, and he agreed to answer some questions.
1) I’ve seen your name around CAD forums for a long time, and we’ve had a little interaction over the years, I don’t even know how far back. There are a couple of people who were in that group that I’ve had a lot of respect for, and your name is definitely one of those. I know mostly that you’ve been involved as an industrial designer in the NYC area.
Many thanks Matt, it’s great to be interviewed by you because, for me, you are among the original OGs in the CAD community, especially Solidworks. Along with the likes of Wayne Tiffany, Mark Biasotti, Devon T. Sowell, Ed Eaton, Paul Salvador, and Richard Doyle, among others, you were one of the original people that I started meeting around Solidworks.
If memory serves me correctly, it was around 1998/1999 when I first encountered most of the people that I still know to this day. It was one of the original Solidworks user forums, which I believe became the Google group “comp.cad.solidworks” or something to that extent. I was a major rookie back then with only minimal 3D software exposure, because in school we learned CAD-Key, Alias, and some Pro/E. But one thing that I really found exciting was learning how to figure out software and push the envelope by making the most of it.
2) Can you give a little history of how you got started on your current path, how that connected you to CAD forums, and how you got to be in ID in NYC, and your involvement in the CAD world – how CAD works in your process?
I got my degree in Industrial Design from Pratt Institute, which has a rich history, with a training program very much rooted in the Bauhaus design movement. I also did a brief stint in Architecture, but that felt too restrictive at the time; quite interestingly, architects would become some of my best clients to this day.
After getting out of college I “retired” for about 3 years and did an easy retail management job selling rollerblades, snowboards, and skateboards. It helped get my mind off design because I was exhausted at the time. But as life would have it, not too far from where I worked there was a small design studio, solely run by Lisa Smith (lisasmith.net). When I met her she told me that she needed help creating prototypes for a project. The method she was looking for I would describe as similar to turning clay on a potter’s wheel, with an extra step where plaster is then turned/poured over the clay to get the final shape. This just so happened to be a prototyping method that I had learned in school, and she was excited that someone really knew how to do it.
At the end of the three months we completed the project and showed several of the designs to companies like Waterford Crystals, Rosenthal, and Nambe.
From there, if the company liked the design, they would license them from her and go into production.
I thought that would be the end of things, as I wasn’t a full-time employee, just freelance, but she then tasked me with upgrading both her computers (she had an Apple IIc) and our 3D software. She was a hardcore Vellum person, and rightly so; there really wasn’t much else that ran natively on Mac OS at the time. I then contacted several companies to ask about pricing and arrange demonstrations of the different CAD packages. I whittled it down to three companies, who sent in their salespeople to demonstrate Inventor, Pro/E, and Solidworks. I at least knew that we needed a true 3D CAD system rather than a VFX (visual effects) package like Maya or 3ds Max, which weren’t really targeted at the industrial design community at the time. I personally wanted Alias, but the price point was well outside the budget we felt was warranted for the studio.
Little did I know at the time, but this decision was a very pivotal one because of where knowing this software has taken me. In the end, the decision was largely based on the sales guy’s demeanor, how he presented, and that he was willing to show how we might use the software for the types of designs he saw in the design studio. The other demonstrations were just canned show-and-tells, and also very arrogant and pushy. Ultimately we chose Solidworks, and what an amazing journey it’s been so far.
Though I had not used Solidworks before, the solid shading was by far a better look than the wireframe models on the screen back when I used CAD-Key. While the pay wasn’t the greatest, she did give me the keys to the studio, and having 24-hour access meant that I could practice after she left for the day. And that’s just what I did. I fumbled around feeling out of sorts because I couldn’t figure out some basic things (e.g. getting back in to edit a sketch to change the geometry, or even knowing that you could change the plane/face that the sketch was on). So I started searching the web for answers and initially found Paul Salvador’s (www.zxys.com) and Mike J Wilson’s (www.mikejwilson.com) websites. Talk about a gold mine: there were ready-made Solidworks models, which I downloaded and studied by rolling back the history tree to see how each model was made. Couple that with the Google group and I had the start of some great ingredients. The main users in the group were engineers at the time, and back then punches weren’t pulled when “silly” questions were asked, but we all had to start somewhere.
Fast forward about a year: we had been working on many different designs in the home/office furniture, lighting, and tabletop design world. Towards the end of that year Lisa took a four-day training class with a reseller (whom I ended up working for years later), and she came back with the Solidworks Essentials training book. It was a standard Solidworks training book that got me to understand a lot more of what the software could do. In that year, though I had become more proficient in Solidworks, I had picked up a lot of bad habits which, combined with not knowing all of the features in the software, made for lots of errors in the Feature Manager tree. Going through the book helped me correct that, and made it clear I needed to take the same training she did. At the end of those four days I definitely came out the other side with a much better understanding of parametric modeling and the need to properly lay things out, as much as one can, ahead of time. Though it still baffles me to this day why Solidworks originally had the limitation of not allowing multiple bodies in a Part file.
One of the first major projects that I tackled using Solidworks was a stacking chair (see image below). To this day I still have the original files I started with, plus versions from six months, a year, and two years along the way, which, more than anything else, show the progress of how much I optimized my modeling methods, the features I used, and my overall understanding of the software.
On the face of it, this isn’t that complicated a model to make, save for the arm rest area, which had a compound curve going in three directions at once. If I were to show you the initial modeling method I approached it with, your jaw would drop: over 300 features for just that one area because, as I would learn later on, projected curves are a way better option than 100 planes, a point on each plane, a curve through 3D points, etc. As a professor at Parsons The New School for 18 years, I would always show my students this example of just how awful even I started out, and how over time I got better at understanding the software. It’s also the first time, using an assembly with collision detection turned on, that I learned how to check for interference. Probably the most interesting aspect of this project, around 1999–2000, is that we had the frame made on a CNC tube bender (at the time it was rather hard to find a vendor that would bend the compound curve of the arm), 3D printed the wooden area, and assembled it all in a week so that the client could sit on it and give approval to move forward with manufacturing.
Fast forward about seven years: I started working for a Solidworks VAR specifically because of my extensive experience with 3D printing. But as experienced as I was with Solidworks, it paled in comparison to the depth of knowledge I would need to offer training and tech support for clients. At the time I can say I was definitely ahead of the curve, because I had to cover Solidworks, 3D printing training/installation, and 3D scanning. This is where I really began to expand my horizons beyond just Solidworks. Sure, I had used Rhino3D, Alias, and some AutoCAD, but what I really started to understand was how the trifecta of these three areas of design and manufacturing could expand well beyond just the engineering field. Working with clients such as Macy’s, Hasbro, and Boston Dynamics, in industries such as architecture, automotive, medical, and many more, I saw an explosion happen right before my eyes.
2) Talk a little about the special environment for your kind of work in NYC.
Being in New York, there are just so many layers of business and people to interact and connect with. People whom I’ve trained in Solidworks, both at the VAR and at the college, over the course of the past 30 years are connections that, to this day, are also close friends and clients. From a freelancing and consulting point of view there’s no shortage of business either. One of my favorite clients fabricates staircases for homes and businesses, and no two projects they work on are exactly the same. And if there’s one thing I cherish, it’s working on custom designs rather than mass manufacturing. New York is filled with eclectic venues and design projects that I think really do make it one of the more unique places to work. The talent pool of people in their respective industries is always aiming to push the envelope in new ways, and I find inspiration in that when working on personal projects, some of which have been in progress for over 20 years. I am both too much of a perfectionist about some of the details I want to see, and also don’t want to just make more junk in the world that will end up in the trash.
3) Your focus seems to have changed in the last year or so. Catch us up on how you came to Direct Dimensions, and the role that you play in that organization.
Yes, this new direction that’s now 99% focused on 3D scanning is new to me in terms of companies, but I’ve known Direct Dimensions for over 15 years. When I first started using 3D scanning back around 2002–2003, it was in the vein of capturing clay or plaster models that I had made by hand and needed to digitize to get into Solidworks. It’s amazing how far the capabilities of today’s scanners have come: overall resolution, fine details, full color and textures, captured with just your phone or a DSLR camera and then post-processed into a 3D model.
Direct Dimensions has been in the 3D scanning arena for over 27 years, and there isn’t an industry we haven’t worked in: movies and gaming, where we scan people, objects, and buildings for use by the VFX department; architecture, scanning buildings inside and out to convert the point cloud into Revit 3D models or 2D drawings for AutoCAD; automotive/aerospace, scanning cars, planes, and tanks; and museum/artist objects, generally for archival purposes and documentation, but also for reproduction and fabrication.
Joining the team was a no-brainer; they’re some of the best minds I’ve had the opportunity to work with, mainly because no project is exactly the same. In the processes used, the software leveraged in each workflow, and the handling of 3D data of all types, the different team members are just captains of their respective fields.
4) Talk a little bit about the overlap between doing original design work, and working with point cloud data. How can/should point type data be used by designers, especially organic shape designers?
Over the course of the past 40 or so years, point cloud and mesh data weren’t generally file types that designers and engineers worked in or with; they just weren’t part of the workflows used to get the job done. If there was a real-world object that needed to be put into CAD, that was a job for ruler and caliper and, depending on the object, weeks of reverse engineering. And while there isn’t a perfect 3D scanner for every situation, the level of detail that can be captured initially is well down into the micron level. This means better initial data with which to begin the reverse engineering process. And this doesn’t mean engineering only; for example, optimizing the topology of assets for movies or gaming is a very different process from what’s needed for objects going into CAD.
Let’s talk about the automotive industry for a moment, and making full-scale cars. This is something I personally love: the hand craftsmanship of making that model in clay. There’s just nothing like exploring a model with your hands and eyes; even VR headsets can’t replace that. And the only way to stay truly faithful to that original clay-sculpted model is 3D scanning. There are quite a few different approaches and methods, all with pros and cons, which are quite often combined in post: from a photogrammetry approach, taking hundreds if not thousands of photos of an object and essentially stitching them together to create a 3D model, to laser-based systems that pass over the object to capture all of the details. Cars have some of the most organic shapes and compound curves, some subtle, others grand, which to the human eye make all the difference in the world to a design’s aesthetics. 3D scanning helps that translate from physical to digital.
5) How do you see these technologies being applied in the mechanical machine design world, or just areas outside of the organic industrial/consumer design world? Touch on the intersection of mainstream CAD tools with authoring/editing of point data. Also, do you see any relationship between SubD and point cloud data?
For artists, designers, and engineers, sketching on paper is always a nice start, and hopefully there are times when physical mock-ups are created. Here’s where 3D scanning can come into play: each mock-up, as it progresses, can be 3D scanned. Where this really becomes interesting is that while one designer progresses in one direction physically, another designer can explore the design digitally. The mesh modeling packages (i.e. ZBrush, Modo, 3D Coat, Blender, etc.) offer tools similar to real-world sculpting with clay. Later on, the two explorations can be brought back together again, either the digital model through 3D printing, or the final physical design 3D scanned again.
Also a very key area for engineers is first article inspection: checking that a design coming off the factory line, for instance, matches its CAD counterpart, and when it doesn’t, showing where. This can also give feedback about a given forming tool’s life span, and whether it’s starting to wear as it reaches its expected lifetime.
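As a toy illustration of the idea (not any particular inspection package's API, and with made-up probe data), the sketch below compares nominal CAD points against their scanned counterparts and reports the largest deviation. Real first-article inspection tools do a far more sophisticated surface-to-surface comparison, but the core question is the same: does it match, and if not, where?

```python
import math

def inspect(nominal, measured, tolerance):
    """Compare nominal (CAD) points to measured (scanned) points.
    Returns (passed, worst_index, worst_deviation) so a report can say
    not just whether the part is out of spec but where."""
    deviations = [math.dist(n, m) for n, m in zip(nominal, measured)]
    worst = max(range(len(deviations)), key=deviations.__getitem__)
    return deviations[worst] <= tolerance, worst, deviations[worst]

# Hypothetical data: three probe points, the second drifted 0.3 mm in z.
nominal  = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.0), (50.0, 25.0, 0.0)]
measured = [(0.0, 0.0, 0.0), (50.0, 0.0, 0.3), (50.0, 25.0, 0.0)]
passed, where, dev = inspect(nominal, measured, tolerance=0.1)
print(passed, where, round(dev, 3))  # False 1 0.3
```

The index in the result is the "not only if, but where" part of the answer; production tools render it as a color-mapped deviation plot instead.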
3D scanning, and especially CT scanning, is really starting to open up avenues within so many different industries, and we’re still just scratching the surface. The shoe industry has begun leveraging CT scanners to see the interior wear and tear of a sneaker; maybe there are unseen failure points that they can now see and use to improve the initial designs. Seeing through objects gives us a Superman X-ray view into things, and we can even parse the scanned data by, say, material density or other parameters, which gives us a very new feedback loop.
The current range of apps on your phone (Polycam, EveryPoint, Scaniverse, and SiteScape are but a few of over a dozen) all use the depth sensors and cameras to help capture objects. Some deliver the data as a point cloud, which open source software like CloudCompare and MeshLab can open. Other apps generate a mesh from the point cloud, which a wider range of 3D applications can open. But this is an area where understanding the 3D data type and the 3D software being used is vital. General CAD applications a while back were never really great at working with mesh data; the kernel/code was just not meant to deal with 3D data like that. So usually a translation step is needed to convert mesh data (.stl, .obj, or .fbx) to a BREP/NURBS model (.step, .x_t, or .iges).
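To make the data-type point concrete, here is a minimal sketch (pure Python, no CAD library, file name is arbitrary) that writes and re-reads a one-triangle ASCII STL file. It shows what a mesh format actually stores: bare facets, with no curves, features, or history, which is exactly why converting a mesh back to a BREP/NURBS model is a real reverse-engineering step rather than a file rename.

```python
def write_ascii_stl(path, triangles):
    """Write triangles (lists of three (x, y, z) tuples) as ASCII STL."""
    with open(path, "w") as f:
        f.write("solid sketch\n")
        for tri in triangles:
            # A real exporter computes the facet normal; zeros are legal
            # placeholders that most readers ignore.
            f.write("  facet normal 0 0 0\n    outer loop\n")
            for x, y, z in tri:
                f.write(f"      vertex {x} {y} {z}\n")
            f.write("    endloop\n  endfacet\n")
        f.write("endsolid sketch\n")

def read_ascii_stl(path):
    """Read the vertices back; returns a list of triangles."""
    tris, current = [], []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if parts and parts[0] == "vertex":
                current.append(tuple(float(v) for v in parts[1:4]))
                if len(current) == 3:
                    tris.append(current)
                    current = []
    return tris

# One right triangle in the XY plane survives the round trip intact,
# but nothing about it says "this was a planar face of a bracket".
tri = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]]
write_ascii_stl("demo.stl", tri)
assert read_ascii_stl("demo.stl") == tri
```

A .step file describing the same triangle would carry topology (faces, edges, a surface definition) that the STL simply has no place to record.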
Even SubD data, which I would place somewhere in between raw mesh data and BREP data, has become more prominent in the design process. While the SubD tools available in the CAD world are still very much behind the more VFX-based packages, they are at least a great recent addition. Yes, the overall maturity of the SubD modeling tools is very much software-specific, but it is an amazing way of designing and modeling directly, in a way that parametric modeling isn’t, nor should it be. As a modeling method, especially for shapes that are more organic in nature, SubD offers a level of capability that, while “possible” in CAD (meaning with some very well planned out sketching and features), can take far more time and cause far more of the hair-pulling failures that history-based systems bring to the table.
6) What sort of development would you like to see in tools in the future for dealing with 3D geometrical data for design, scan, and downstream applications like AR/VR, product design, medical applications, architectural/structural issues, etc… what are the functionalities that you wish you had to extend your current capabilities?
This is a very interesting topic that’s at the forefront of a lot of what I do and think about, across a multitude of converging inflection points. When you think about what’s happening today, in comparison to just 20–30 years ago, most industries stayed within their verticals and most people stayed in their lanes. With the advent of today’s 3D software capabilities, raw computational power, and the many platforms able to support this convergence, it’s amazing how these worlds are colliding into each other.
Let’s say you studied architecture with a heavy focus on commercial work or city planning 30 years ago; you may have gone to work for a firm that specializes in that and stayed within that field. Today this same person could go work for a video game company, because the level of detail and planning happening within these virtual cities is well beyond what someone writing code is going to be capable of doing, and vice versa. The term Digital Twin is one of the biggest areas of expansion today across every industry, in many ways driven by the metaverse and AR/VR. Architects are designing the physical buildings that we go into, and some of the people on staff are video game designers, because the aim of showing and immersing clients in a digital VR “living” model is a vital aspect.
Here’s an interesting convergence point: while not yet widely used, I do know that architectural and engineering firms are 3D scanning during construction so they can verify what’s being built, check whether things are on schedule, and if not, find out why and make course corrections right away, rather than halting and going back because of some big change order. We’ve seen change orders reduced by 15%–25% when a client has combined the point cloud from the 3D scan with the Revit model, reviewed in VR headsets. This also gives the owner/developer 3D data so that, say, 20–40 years down the road when renovations need to be made, there’s no need to dig up the blueprints; you can just open up the point cloud/Revit model and see what’s behind the walls.
We are definitely seeing some amazing aspects of AI and machine learning starting to make their way into 3D scanning. While still very early on, Apple has shown an SDK (software development kit) where the phone will automatically recognize objects like a refrigerator, stove, desk, chair, etc. This basically leapfrogs over having to process the point cloud or mesh, because it generates a CAD model in real time. AI recognition of an object means there could potentially be a 3D database pinged against an existing vendor’s model, which can then give additional context about said object.
7) There are students and graduates out there. Some are thinking about careers that involve computer generated 3D in one way or another, whether product design, games, 3D art, 3D scan/print, architecture, engineering, etc… What guidance would you give them to help them get a grip on the technology that they’ll need to be familiar with to make it in the real working world?
WHEW… This is like asking how big the universe is, or whether there is life on other planets. This is about as far and wide as it gets, as there isn’t any specific start or end point anywhere along the way. There’s just so much to learn and no one definitive way to learn it, especially considering how much overlap is happening among these converging disciplines. Having been a professor at a college here in New York, this is an area we discussed as it related to the curriculum and learning outcomes. One of the major takeaways we all walked away with is that four years wouldn’t be enough time to truly grasp all of these different aspects and concepts, and even if, let’s say, the program were extended by a year or two, you’d still only be scratching the surface. The final question is: what career would the student choose if they did get this well-rounded education? Take the metaverse, for example: even though it’s a “singularity” of sorts, it spans wider, farther, and deeper than just about anything else combined, and companies are seeing it evolve in ways where even they are not sure where it’s going.
All that said, here’s what I would suggest to any current student: pick a lane that you consider your main core strength. I think it’s easier to branch out later once you have a solid core to depart from. I will also say that knowing how to write code/scripts is helpful in every area, be it Python, C++, or the specific scripting language found inside the different 3D packages. This becomes really relevant because it affords a level of customization that the standard tools don’t.
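As one small, hypothetical example of the customization scripting affords (the function name, dimensions, and point counts here are mine, not from any particular package): a few lines of Python can generate a point set for a helix that would be tedious to sketch by hand, ready to import into a CAD package as a curve through points.

```python
import math

def helix_points(radius, pitch, turns, points_per_turn=36):
    """Generate (x, y, z) points along a helix: radius in the XY plane,
    rising by `pitch` units per full turn. A curve like this is trivial
    to script but painful to construct point-by-point in a sketch."""
    pts = []
    n = int(turns * points_per_turn)
    for i in range(n + 1):
        t = 2 * math.pi * i / points_per_turn   # angle in radians
        pts.append((radius * math.cos(t),
                    radius * math.sin(t),
                    pitch * t / (2 * math.pi)))  # height climbs with angle
    return pts

pts = helix_points(radius=10.0, pitch=5.0, turns=3)
print(len(pts))   # 109 points: 3 turns at 36 points per turn, plus the start
print(pts[0])     # (10.0, 0.0, 0.0)
```

From here the points could be written out as a CSV or fed to a package's own scripting API; the payoff is that changing the pitch or turn count is a one-line edit instead of a remodel.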
Gain an understanding of the different workflows within that main lane. Consider the product design industry and the many ways that items can be manufactured: injection molding, sheet metal, additive manufacturing, blow molding, etc., each of which requires a subset of skills to be mastered. This isn’t too different from what happens in the VFX industry: lighting, texturing, rigging, animating, etc. The architectural industry, where most programs are five years because there is so much to learn, is similar: space planning, city planning, structural engineering, and digital twins are all subsets of skills.
As mentioned previously, the specific tipping points that have made the overlap that much more prominent are additive manufacturing, 3D scanning, and the metaverse, the last being pulled heavily by the digital twin and NFT movements. Within the VFX industry, the making of physical models and costumes to help sell a scene gave way to a big push towards “all things digital,” which didn’t look all that good. Once additive manufacturing saw real-world usage, VFX users had to learn to properly output models for physical means of production. When it comes to VR/AR, models made for manufacturing by CAD users are generally too heavy and overkill for what’s needed in digital models. Depending on the platform where the models will be displayed (Sketchfab, 3D PDFs, VNTANA), CAD models have to be optimized in new ways that are far different from what’s needed for manufacturing.
One of the most important pieces of advice I would give any student is to take the time to understand the different 3D data types. While that is a start, it’s also vital to understand how the 3D software being used handles that data. Quite often, CAD packages did not like working with mesh data; they preferred BREP/NURBS file types, as that’s what the code was originally written for. The same is true going the opposite way, when trying to bring a CAD file into a VFX package like Maya, 3ds Max, Cinema 4D, etc. Beyond point clouds, meshes, and CAD data types, there are now the emerging implicit modeling methods found in software like nTopology, 3DXpert, and Altair Inspire, which are helping to push out models, specifically for additive manufacturing, that no CAD or VFX software can describe well enough to tell the 3D printers what to do. This is a whole new area that I would suggest for another article, as it really is just the tip of the iceberg in terms of what the implications of this process are and where it can go.
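To make "implicit modeling" concrete, here is a minimal sketch of the underlying idea (a signed distance function; this is the general concept, not how nTopology or 3DXpert actually work internally). The part is a function you can evaluate at any point in space, negative inside the material and positive outside, rather than a stored set of faces or facets. The shape and dimensions below are illustrative.

```python
import math

def shell_sdf(x, y, z, outer=10.0, thickness=1.0):
    """Signed distance for a hollow spherical shell: take the sphere of
    radius `outer` and thicken its surface symmetrically by `thickness`.
    Negative values are inside the material; positive are outside."""
    r = math.sqrt(x * x + y * y + z * z)   # distance from the origin
    return abs(r - outer) - thickness / 2  # offset the surface both ways

# The wall contains the nominal surface; the core is hollow.
print(shell_sdf(10.0, 0.0, 0.0) < 0)  # True: inside the wall
print(shell_sdf(0.0, 0.0, 0.0) > 0)   # True: the hollow core is empty space
print(shell_sdf(12.0, 0.0, 0.0) > 0)  # True: outside the part entirely
```

An implicit modeler or slicer samples a function like this on a grid to generate lattices or toolpaths directly, which is why geometry that is painful as BREP or mesh (fine lattices, graded wall thickness) is natural in this representation.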
Wow! Thanks to Arthur Young-Spivey for an insider’s look at the intersection of CAD, 3D scan, 3D print, and visual effects. I wish I had time in the day to research all of the software tools mentioned here. Thanks again for taking the time to help people understand how important these kinds of data and the tools used to manipulate them have become.