Humans are good at solving big design problems. Such problems require creative and exploratory decision making – skills in which people excel.
Historically, artificial intelligence (AI) agents haven't been able to match these sorts of skills.
Engineers using AI for assistance have traditionally applied it to problems within a defined set of rules, rather than having it follow human strategies to create something new.
Now, a US study published in the Journal of Mechanical Design suggests that AI agents can be trained to adopt human design strategies to solve problems.
This research considers an AI framework that learns human design strategies by observing human data. It can then generate new designs without explicit goal information, bias, or guidance.
The study was co-authored by Jonathan Cagan and Ayush Raina from Carnegie Mellon University and Chris McComb, of Pennsylvania State University.
“The AI is not just mimicking or regurgitating solutions that already exist,” says Cagan. “It’s learning how people solve a specific type of problem and creating new design solutions from scratch.”
How good can an AI design agent be? “The answer is quite good,” says Cagan.
The study focused on truss problems because they represent complex engineering design challenges. Commonly seen in bridges, a truss is an assembly of rods that forms a complete structure.
The AI agents were trained to observe the sequence of design modifications that engineers followed when creating a truss. The agents’ observations were based on the same visual information that engineers use – pixels on a screen – but with no additional context.
When it was the agents’ turn to design, they imagined design progressions that were similar to those used by humans and then generated responses to realise their designs.
The researchers emphasised visualisation in the process because vision is an integral part of how humans perceive the world and go about solving problems.
“We were trying to have the agents create designs similar to how humans do it, imitating the process they use: how they look at the design, how they take the next action, and then create a new design, step by step,” says Raina.
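The study's agents learn from raw pixels with deep neural networks, but the underlying idea (learning a step-by-step design policy purely by watching demonstrations, then replaying it to build something new) can be sketched with a toy behavioural-cloning example. Everything below is invented for illustration – the states, actions, and rod names are not from the paper:

```python
from collections import Counter, defaultdict

# Toy illustration (not the paper's actual model): an agent observes
# sequences of (state, action) pairs from human demonstrations, then
# replays the most common action it saw in each state, one design
# modification at a time.

def learn_policy(demonstrations):
    """Count which action each observed state most often led to."""
    counts = defaultdict(Counter)
    for trajectory in demonstrations:
        for state, action in trajectory:
            counts[state][action] += 1
    return {s: c.most_common(1)[0][0] for s, c in counts.items()}

def generate_design(policy, start_state, apply_action, steps):
    """Roll the learned policy forward, step by step."""
    state, design = start_state, []
    for _ in range(steps):
        if state not in policy:   # unseen state: no learned strategy, stop
            break
        action = policy[state]
        design.append(action)
        state = apply_action(state, action)
    return design

# Hypothetical demonstrations: each "state" is the number of rods
# placed so far; each action adds a rod of some length.
demos = [
    [(0, "rod_3"), (1, "rod_4"), (2, "rod_5")],
    [(0, "rod_3"), (1, "rod_4"), (2, "rod_5")],
    [(0, "rod_2"), (1, "rod_4"), (2, "rod_5")],
]
policy = learn_policy(demos)
design = generate_design(policy, 0, lambda s, a: s + 1, steps=5)
print(design)  # ['rod_3', 'rod_4', 'rod_5']
```

Note that, as in the study, the toy agent is never told a goal or given feedback on quality; it only reproduces the strategy it observed, and it stops when it reaches a state it has never seen.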
The researchers then tested the AI agents on truss problems similar to those in the training data and found that, on average, the agents performed better than the humans.
Significantly, this success came without many of the advantages humans have at hand when they’re solving problems.
Unlike humans, the agents were not working with a specific goal – such as making something lightweight – and did not receive feedback on how well they were doing. Instead, they only used the vision-based human strategy techniques they’d been trained to use.