How Generative AI Helps Bring Big Design Ideas to Life

I’m a terrible artist.

Though I dabble with 3D design, I have zero drawing ability and my painting skills are even worse. That doesn’t stop me from having an excess of creative ideas, though, and it’s demotivating not being able to bring those ideas to life. Generative AI, when used properly, can allow people with big ideas and little skill to carry those concepts into the real world.

Machine learning and AI are all the rage, with OpenAI, Google and others striving to give us large language models capable of natural-sounding responses. In the visual world, companies are bringing generative AI to art, allowing us to make images using nothing but words (Midjourney), or by creating and adapting photos with AI (Adobe). These tools have a chance to make art accessible in a way that’s never been achieved before.

I’m a maker, a person who loves to create physical things in the real world, but it seemed like AI wouldn’t really help me with that. Sure, several of my 3D printers, like the AnkerMake M5, use AI to spot errors in the print, but that’s rudimentary at best. I’d seen nothing to make me think AI could help realize my ideas. That is, until I saw a video from another maker, Andrew Sink, who used text prompts through ChatGPT to create a 3D object that could be 3D printed at home using code.

“I almost missed a flight because I was so captivated the first time I tried it!” Sink told me. “Seeing ChatGPT produce what looked like a 3D model in the form of an .STL file was an exhilarating experience.”

An STL file describes a 3D-printable object as a mesh of triangles, known as facets, which a slicer turns into printing instructions. Sink used ChatGPT to write the STL file directly, completely circumventing the design process and putting it in the hands of AI, and it worked. It was only a simple cube, but this was the first time I thought about how AI could produce tangible products in the physical world.
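To make that concrete, here's a minimal sketch of what an ASCII STL file for a cube contains. This is my own illustrative example in pure Python, not Sink's actual file or output: 12 facets (two triangles per face), each with a normal vector and three vertices.

```python
# A unit cube as 8 corner vertices and 12 triangles (2 per face).
CUBE_VERTICES = [
    (0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),  # bottom square
    (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1),  # top square
]

# Vertex indices, wound counter-clockwise as seen from outside the cube,
# so the computed normals point outward.
CUBE_TRIANGLES = [
    (0, 2, 1), (0, 3, 2),  # bottom (-Z)
    (4, 5, 6), (4, 6, 7),  # top (+Z)
    (0, 1, 5), (0, 5, 4),  # front (-Y)
    (2, 3, 7), (2, 7, 6),  # back (+Y)
    (1, 2, 6), (1, 6, 5),  # right (+X)
    (3, 0, 4), (3, 4, 7),  # left (-X)
]

def triangle_normal(a, b, c):
    # Normalized cross product of two edge vectors.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    n = [u[1] * v[2] - u[2] * v[1],
         u[2] * v[0] - u[0] * v[2],
         u[0] * v[1] - u[1] * v[0]]
    length = sum(x * x for x in n) ** 0.5
    return [x / length for x in n]

def cube_stl(name="cube"):
    """Return the cube as an ASCII STL string."""
    lines = [f"solid {name}"]
    for i, j, k in CUBE_TRIANGLES:
        a, b, c = CUBE_VERTICES[i], CUBE_VERTICES[j], CUBE_VERTICES[k]
        nx, ny, nz = triangle_normal(a, b, c)
        lines.append(f"  facet normal {nx:e} {ny:e} {nz:e}")
        lines.append("    outer loop")
        for x, y, z in (a, b, c):
            lines.append(f"      vertex {x:e} {y:e} {z:e}")
        lines.append("    endloop")
        lines.append("  endfacet")
    lines.append(f"endsolid {name}")
    return "\n".join(lines)

# Write the file; any slicer should accept it:
# open("cube.stl", "w").write(cube_stl())
```

Even a shape this simple needs 12 facets, which gives a sense of why hand-writing STL for anything organic is impractical, and why having an AI generate the geometry is tempting.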

Sink is the first to admit that generative AI still needs some supervision by someone with technical chops: “Upon closer examination (as documented via YouTube Short), the file had multiple issues and required cleanup in the form of mesh editing, something that many users will likely not expect. This brought me back to reality, and reminded me to think about ChatGPT as a tool to be used in a workflow, and not a complete solution.”
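The kind of mesh problem Sink describes is easy to check for programmatically. As a hedged sketch (my own example, not the check Sink ran), a mesh is "watertight" only if every edge is shared by exactly two triangles; a missing or duplicated facet fails this test, which is one of the basic checks slicers perform before printing.

```python
from collections import Counter

def is_watertight(triangles):
    """Return True if every edge is shared by exactly two triangles."""
    edges = Counter()
    for a, b, c in triangles:
        for u, v in ((a, b), (b, c), (c, a)):
            # frozenset ignores edge direction: (a, b) == (b, a).
            edges[frozenset((u, v))] += 1
    return all(count == 2 for count in edges.values())

# A closed cube (12 triangles over 8 vertex indices) passes the check;
# deleting one triangle leaves a hole and the mesh fails.
cube = [
    (0, 2, 1), (0, 3, 2), (4, 5, 6), (4, 6, 7),
    (0, 1, 5), (0, 5, 4), (2, 3, 7), (2, 7, 6),
    (1, 2, 6), (1, 6, 5), (3, 0, 4), (3, 4, 7),
]
print(is_watertight(cube))       # True
print(is_watertight(cube[:-1]))  # False: one triangle removed leaves a hole
```

Tools like Blender's 3D-Print Toolbox automate exactly this kind of cleanup, which is why Sink frames ChatGPT as one step in a workflow rather than a complete solution.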

However, it does open the door to something more. New companies have started springing up, using generative AI to create artwork from text-based commands — called prompts — and some of the results these companies are producing are spectacular.

Two-dimensional art is already breathtaking

If you’re looking for something that transforms words into 2D imagery, it’s hard to beat Midjourney. The company runs its service mainly through Discord and produces stunning images from text prompts.

My wife and I are working on a project to convert our basement into a 1920s speakeasy, complete with a bar, pool table, dartboard, leather couches, and booths to play board games. It’s ambitious; there’s a lot of wall space we need to cover, so we wanted to try some generative art for our walls. The idea was to give us completely unique art in the exact style we wanted in a color scheme that matched our room.

We wanted to create a good 1920s feel in both images from Midjourney. 

Illustration by Midjourney

We had to learn the craft of “prompt engineering” to write the kind of detailed text prompts required to produce the image we wanted. We tried two different prompts for the images above.

Left image: “A 1920s street scene with suited men walking on the sidewalk. People have umbrellas open and it is raining. A tram is in the picture with a red color on the tram. Grainy photograph.”

Right image: “A 1920s Art Deco speakeasy with lots of hanging lights and red leather couches. Old photograph style.”

While the images themselves aren’t perfect — check out the gentleman with an umbrella for a hat on the left of the image — they’re good enough to be hung in our basement. The imperfections even add to the fun of having them AI-generated.

Adobe also released a generative AI tool for Photoshop that can do something similar to what Midjourney does, and perhaps go even further, expanding your images or editing them in new and interesting ways. You can see a lot of problems on the fringes of this expanded image from Adobe, but the potential is there.

While the Midjourney image has a few small issues, the Adobe extension has many more, as seen here. 

Illustration by Midjourney and Adobe

Both tools allow you to create art that roughly approximates what your mind’s eye can imagine, without you having to learn the skills to make it yourself or, in the case of these images, go back in time to take the photos.

“For now it’s a tool that actually helps in many ways,” said Fotis Mint, a popular 3D sculptor, when asked about generative AI’s effect on established artists. “And we should definitely start training and include it in our pipeline.” He also said he’d use Midjourney to create a concept sketch of an idea to help visualize it. “It’s very helpful to people that don’t sketch.”

When I asked him about using 3D generative tools, though, he was less enthusiastic. “I would never use a premade 3D mesh to sculpt. Feels like cheating to me.” Nor would he use something Midjourney had generated as the only source of his inspiration. As Midjourney offers four variations — soon to be 16 — for every text prompt, you can see how an artist like Fotis could use those variations to inspire his art without copying them directly.

3D generative design still has a way to go

A 3D model of a shield with metal banding

It’s much easier to start from a pregenerated model than try to make it yourself.

James Bricknell/CNET

Though incredible sculptors like Fotis Mint may not find 3D generative AI helpful, for people with very little skill, like myself, the idea is more appealing. 

My personal experience in 3D design is limited. I can make fairly simple geometric shapes like card boxes for board games or, at my peak design skill, a medal based on the Order of Merlin from Harry Potter. For more organic shapes, I’m lost, and that holds true for a lot of the 3D-printing community. The spirit is willing, but the flesh is not. Generative AI, even in its infancy, can help bridge that gap.

3DFY.ai‘s 3DFY Prompt app is a great example of how generative AI could help someone create a base model that can later be improved on. Right now the browser-based tool can generate only a few specific categories of models — tables, sofas, swords and shields, etc. — with a limited vocabulary. 3DFY says it uses only in-house data to generate 3D models and doesn’t use data taken from the web. This narrows what it can achieve, but it’s enough to get someone like me started.

A yellow and brown shield on a 3D slicer app

Having a printer that prints in multiple colors is awesome.

James Bricknell/CNET

One of my first prompts for 3DFY was to make a shield. Specifically, I asked it to create a tower shield with a Celtic design around the outside. It turns out the app isn’t ready for intricate detail like Celtic designs. “We trained our AI model to generate more functional and realistic items so not all creative ideas can be produced,” Eliran Dehan, CEO of 3DFY, said in our conversation. Dehan went on to say that due to user feedback, 3DFY is expanding its variability to include more stylistic choices in the future.

As it turned out, I was happy with what it could offer. From that basic starting point, I could spend some time with a 3D model editor like Blender or an iOS app like Nomad Sculpt to add battle damage and customize it further. Detailing something that already exists is much easier than starting from scratch.

Once I was happy with the design, I sent it to my 3D printer — in this case, the Bambu Lab X1 Carbon — and the end result is a 3D model dreamed up by me, built by AI then further refined to reflect my overall vision.

The GitHub page for Shap-e with spinning 3d models in a grid

Is that a plane that looks like a banana or a banana that looks like a plane?

James Bricknell/CNET

The ethics of it all

I’m not here to argue whether generative AI is ethical, a debate that’ll likely rage for years until proper regulation exists. This article assumes that any art has been obtained with permission and that you’re enjoying the fruits of your imagination without stealing from others.

Despite that somewhat utopian statement, questions abound about the ethical use of generative art. Though AI companies often say the work they produce is original, many artists contend the models are trained on real-world art without the original creators’ permission. Because of this, everything that’s produced could be said to be derivative. Some companies, like 3DFY.ai, use only in-house data created specifically to train their model, but smaller data sets limit the range of the final output.

In the future, especially for 3D printing and 3D models, companies could partner to use libraries of 3D files, such as Printables.com‘s massive collection of 3D printable models or Sketchfab‘s library of models for gaming and computer graphics. There, artists could opt in to having their artwork used to train the AI models. This would give machine learning companies access to tens of thousands of 3D models, without ethical problems.

“It is a Wild West with the training data for large language models now, and so I suspect some are scraping and using the 3D designs to train the models, not honoring the licenses,” said Josef Prusa, CEO of 3D-printer maker Prusa Research and the founder of Printables.com, via Twitter DM. “The subset for models with licenses to even allow that is tiny.”

When I asked Prusa about an opt-in system for designers who use Printables, he said, “We are building the Printables community by supporting the creators, and just the idea of training a model on their unique creations doesn’t feel right.”

Is it overly optimistic to think we can overcome these ethical hurdles? Naive, even? Maybe. But now’s the time for us to steer this software in the right direction. AI is simply an input/output mechanism; you put data in and you get data out. Setting guidelines early on as to what is an acceptable input will be the key to making this technology work for us. I choose to believe we can be utopian, but then, I was raised on Star Trek.

The future is bright for those with imagination

The crew of the USS Enterprise-D standing in the holodeck in front of a metal table

The Holodeck is the ultimate maker space. I can’t wait for it to be real. 

Paramount

Troi: Computer, show me a rectangular conference table.
La Forge: It’s too high. Computer, reduce the height of the table by 25 percent.
Worf: No, the table was smaller. And it was inclined. Computer, decrease the table’s surface area by 20 percent and incline the top 15 degrees.
Riker: No, it wasn’t made of wood. It was smoother, more metallic.
Troi: Computer, make this a metal table.

The above conversation is from “Schisms,” an episode of Star Trek: The Next Generation in which crew members ask the Holodeck to build a table they’ve all seen in their dreams. They use voice commands to slowly tweak the look and shape until they arrive at what turns out to be the examination table from a shared alien abduction.

None of today’s generative AI models can handle this level of design adjustment. Iterative design will be the crowning achievement of AI-powered generative design; without it, we can’t tweak our models to be what we want them to be. I can see a future where voice-activated design is easy, helpful and precise, something only possible with an LLM.

My imagination holds wonders, as do countless other people’s. We simply lack the skill to move them from the brain to the real world, and while we like to tell people “you can do anything you set your mind to,” the grim reality is that it often isn’t true. Having an AI that translates our words will democratize art in a way that’s helpful for everyone.

For now, being able to generate your own 3D models won’t take away from professional sculptors. If I wanted a detailed sculpt of my face, wrinkles and all, I would pay someone like Fotis Mint to do it, and it would be mind-blowing. However, if I want a “patio door handle, 30 centimeters long, 25 millimeters deep, with a round edge and two screw holes 10 centimeters in from the edge,” I should be able to do that quickly and easily without commissioning someone to do it.

Though there’s a lot of fear surrounding generative AI, from the conversations I’ve had with designers, coders and hobbyists who are looking to the future, they see AI as a tool to be used in their workflow, rather than a replacement for that work. By automating mundane tasks, or by creating something to inspire, generative AI can free up time for artists to become even better in the medium they choose.

For someone like me, whose progress is limited by constraints on time and natural talent, these same tools can help to get over the first hurdle and make it possible to bring thoughts to life.

If generative AI is here to stay, its ultimate goal should be to unleash all our creativity, but it’ll be a long time, if ever, before it can replace true artistry.


Editors’ note: CNET is using an AI engine to help create some stories. For more, see this post.
