Today, I spent some time with Hod Lipson who is a professor at Columbia University. In fact, we recorded this episode right out in front of the university with crying babies going by and kids playing in the park. So it’s a little noisy, but I’ve been inspired by Hod for a long time, because he’s another inventor that worked on 3D printing early on. He is at the forefront of what we’ve been able to do with computers. That’s the kind of thing I’m always really interested in.
He was actually inventing 3D printing at the same time I was, a long time ago. We get to have a conversation about that…the two of us were probably the only people working on inventing 3D printers for food. And Hod has since gone on to do a really cool side project trying to create a robot artist called Pix18. It’s not like any other creative robot that you’ve seen or heard about. Honestly, this is a difficult thing to get your head around: can a robot be creative? That’s hard for humans to accept. Of all the people on earth to have a conversation with about this topic, I probably couldn’t do any better than Hod Lipson. Towards the end of this episode, you’ll see. It’s pretty exciting because Hod manages to really blow my mind.
Pablos: You’ve been at Columbia for the last couple of years. You were at Cornell before. For how long, fourteen years?
It seems like a long time. Why make that change?
The change ironically is to be closer to people. I moved to Columbia years ago in part looking for the energy that comes from collision density, from the fact that you meet people doing all kinds of crazy things that don’t necessarily sound related to engineering: fashion, architecture, retail, medicine, you name it. It turns out that once you have all that energy around you, you start creating new things. That’s part of it. That’s gone a little bit. Hopefully, it will come back.
I never had a plan for my career but looking back, I can pretend I had one. The main way that I can frame it is that I always wanted to do new things with computers. I had this big superpower that came from having computers and the whole world hadn’t adopted them yet. You could see it in every business, every industry. As an inventor, I’m looking around for places where the computers hadn’t gone yet and trying to get there first.
I remember you gave this talk when you said, “You can put a computer into this microphone, chair and into everything and they could do something they didn’t do before.”
You’re here surrounded by people who are trying to be creative in a lot of different areas but they don’t know about the technology. The truth is you’ve inspired me even unknowingly over the years because your projects have exactly been that. You got to some of these things before I did. One of them is in 2008 or 2009, I was working on trying to invent 3D printers to print food. Because I talk a lot more than you, people think I invented that stuff. You did it a decade before me or at least years before me, you were printing food or at least had worked on the idea of chocolate or something.
We printed chocolate in 2006 or something like that.
You were there even before me. I didn’t know that at that time but that’s what I mean. It sounded crazy when I did it. It must’ve been even crazier when you did it because it still sounds crazy. It sounds less crazy every year.
Especially now, suddenly people are saying, “What’s the new future of food?” You can’t go to restaurants as easily. I’m saying, “Let’s marry software and food, as you say.” Take software away where it’s not been before. Food is a big piece of our life. Software is a big piece of our life. Let’s put them together and see what happens.
I’ve watched the progression of that one. There are a few robotic restaurants like Spyce in Boston where they’ve worked out automated meal prep using machines. I’ve been thinking it’s one of the most important things for robots to do because the old way is lots of grubby teenagers, lots of ingredients rotting in the back of a restaurant. A huge amount of work to sterilize environments, manage food safety and robots are good at all that stuff. At the time I started working on it, it sounded crazy to everyone but they’ve seen 3D printers and more robots. They spend more of their life in the last decade with computers, smartphones and stuff. They’re more open to it. Now that they don’t want to touch anybody anyway, I think it will be the right time.
It’s going to happen, and the killer app is synthetic meat, which is a thing on its own. For many reasons, people want synthetic meat. 3D printers are perfect for synthetic meat because you can do more than a hamburger patty. You can start making interesting things.
One of the problems with meat, in my understanding, having spent a little bit of time with the Memphis Meats guys, is that meat from a cow has a lot of vasculature, gristle and all these textures that you’re familiar with and have become so used to. The hard part with synthetic meat is texture. With 3D printing, we can start to put that texture in.
We’re also evolved to be very suspicious when it doesn’t have that. If you touch something and it doesn’t have the right hardness to it, something’s not kosher.
I almost hesitate to bring this up. I know a startup that was working on making lab-grown foie gras. I thought that was a genius place to start because foie gras is premium, low volume and also the ethical issues make people nervous about it. The texture doesn’t matter because it gets blended anyway. It’s the perfect meat to start with. I haven’t checked with those guys in a couple of years but that seems like you start with foie gras. If you can make that in a compelling way, then we can use machines like 3D printers to go and start adding texture to the foie gras. If you want something that’s got a little more of a bite to it, we could work from that ingredient up to New York strip steak.
Foie gras can be too esoteric. As you say, if I make a caviar machine, most people don’t eat caviar. They won’t go out of their way to get that. If you make a steak machine, now you’re talking about something that half the people want. That’s a bit of nuance, but the bottom line is that these machines, synthetic meat, and everything that’s happened with COVID and the environment, although that has taken a back seat to everything else, make people suddenly recognize that we should rethink food. That’s a door. That’s an opening for it. Somehow candy, confectionery, pastries, all these other things that we tried to do with food printing didn’t quite take off, but this might. That’s my gamble. We’ll see what happens.
It’s very difficult to gauge, but more than a couple of different investment groups have asked me about food tech since COVID. It could be time. In your mind, why do you correlate lab-grown meat with the need to automate meal production? Why is that a killer app?
It’s a killer app because of what we talked about: the uncanny valley.
This is a way to make lab-grown meat more compelling.
People always ask, “What problem does it solve? What’s the problem with today’s meat preparation? It takes effort, but what’s the problem?” I didn’t even have a good answer for that, but there was this Frankenstein element to it. You’re putting ingredients in a sausage machine and something comes out. But when synthetic meat becomes an option, it suddenly legitimizes this whole area of making synthetic food. It’s a legitimizing force.
I hadn’t thought of that. That’s cool. I eat sausage. How hard could it be to take the lab-grown meat and throw it in some sausage with other things to make the texture compelling?
If you want to make burgers and sausages, sure. But what if you want to make anything that has texture?
I go back to thinking like, “I got a steak on my plate but the truth is I’m going to use a knife to cut it into little pieces so why not make the little pieces.” There’s a lot of romance around food.
It’s very simple, because our brain evolved to validate that food is of good quality. If you eat something raw that’s a little bit rotten, you’re dead. These are serious stakes. Our brain is very sensitive to the meat not looking right, not tasting right, not feeling right and not smelling right.
I’ve noticed that there’s something equivalent to Zoom calls. My brain is sensitive to a lot of things that Zoom doesn’t do. I was hanging out with some guys and they said they hired someone who they’d only met on Zoom. When she showed up for the first time, she’s 6’3”. It wasn’t one of the questions they asked her. It’s no problem but it was shocking because everybody looks like they’re the same height on Zoom.
People don’t look you in the eye on Zoom, at least until this AI thing.
Even though they’re trying, they can’t.
It’s how you can look at the camera for a while.
It’s hard. I got a teleprompter so I can do it, but the other person, unless they have one, isn’t doing it. There are a lot of things like that which we’re figuring out, or starting to understand at scale. Another one I learned: the maximum latency your mind can handle in audio before it starts to freak out, stops believing this is a real person you’re talking to, and you talk over each other, is 180 milliseconds. The average Verizon call in the US right now is 350 milliseconds. That’s a regular cell phone call. That’s why calls suck so bad. It’s not like the calls when you were a kid. You could call the United States from Israel and you’d have less latency. It was all analog. Even though there was latency, it wasn’t that bad. Now, an average call is so bad that my brain doesn’t believe the person I’m talking to is real. On Zoom, something like that is happening.
Everything is like, “No, you go ahead.”
It’s horrible. I don’t understand: if you’re right about the uncanny valley of meat, then what’s going on with sausage? How come I can eat hot dogs? Maybe Israelis don’t eat hot dogs.
I do know how it was to eat the first hotdog.
Kids love it. We’ve indoctrinated kids, so it seems normal.
I bet if you take somebody who’s never eaten a hot dog.
We learn new textures all the time. One of the ways I’ve been defending 3D printers is to say we couldn’t print a steak or French bread, but we could print a new and compelling texture. The way I defend that is by pointing to Clif Bars, smoothies and things that humans had to learn, like Fig Newtons. These don’t grow in nature, but it was easy for us to adopt them. Doritos, that’s not something that God created. Pasta isn’t created by God. It doesn’t grow out of the ground.
We haven’t questioned the psychology of food. I’m sure there’s a lot to discover there.
We’ll find somebody else to harass about that. Robotics was the thing to track here. If there was a unifying theme, it’s robotics.
I’d say the unifying theme is AI.
When do you think you first would have started expressing it that way?
I was a Navy engineer for many years before I did my PhD. In Israel, everybody serves in the army one way or another. I was an engineer, and you think military engineers are making these fancy things, but a lot of it is, you need to install a microwave in a ship. Engineers will spend years studying all that stuff. They’re working with these kinds of problems. It takes a lot of time. There’s no cutting corners. It’s important stuff, but I was thinking a machine could do this. There is an automated way to generate these ideas. I’m not talking about creativity at the level of a patent or discovering something new, but all this stuff that we do that’s relatively mundane but requires generating new ideas. Is there a way to automate that?
To me, it’s the root of it. I started with that. When I finished my term, I went to do a PhD and that was the topic I was looking at. I started off with creativity. It was more formulated as design automation. It is the smaller, less exotic version of creativity. You just want to design something automatically. It doesn’t have to be Picasso. You want to be able to say, “I want this thing installed here.” You can go figure out what needs to be done.
“I want to park in the middle of town. We had this much space. Go figure out how.”
This is a little bit of a design problem. How do I arrange all the buildings so that the maximum number of people can go through, there’s parking, and yet everybody can see the bay? You put in these constraints, you hit enter and the machine finds a solution. This is very different from most software tools, which are about analysis. You can give them all kinds of things and then say: calculate the cost, calculate the sequence of construction, calculate the materials, whether it will break or not. That’s analysis. But to find a new way of arranging things that solves a problem, that’s synthesis, and it is hard to do for humans and hard to do for computers. That was my goal from the beginning, in a very small way. Over time, I’ve been going for bigger goals.
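The analyze-before-you-synthesize idea Hod describes can be sketched in a few lines. This is a hypothetical toy problem (not his actual software): the “analysis” is just a scoring function and a constraint check, and “synthesis” is a search loop that proposes arrangements and keeps the best valid one.

```python
import random

# Toy design synthesis: choose building heights along a waterfront.
# Analysis = scoring a candidate; synthesis = searching for a candidate
# that satisfies the constraints and maximizes the score.

def satisfies_constraints(heights):
    # Constraint: heights must not increase toward the bay (last element),
    # so no building blocks the view of the one behind it.
    return all(heights[i] >= heights[i + 1] for i in range(len(heights) - 1))

def score(heights):
    # Goal: maximize total floor space (proportional to total height).
    return sum(heights)

def synthesize(n_buildings=5, max_height=10, iters=10_000, seed=0):
    rng = random.Random(seed)
    best = [1] * n_buildings          # trivially valid starting layout
    best_score = score(best)
    for _ in range(iters):
        candidate = [rng.randint(1, max_height) for _ in range(n_buildings)]
        if satisfies_constraints(candidate) and score(candidate) > best_score:
            best, best_score = candidate, score(candidate)
    return best, best_score
```

Real design-automation tools replace the random proposal step with smarter search (gradients, evolutionary operators, constraint solvers), but the structure is the same: propose, analyze, keep the best.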
This is amazing because what you’re describing is such a fundamental process. We need to be able to automate how we design any situation or thing with a given set of constraints or values. What you’re saying is that over time, you’re finding bigger things to apply that to. In some sense, trying to go further with it. You described what you were working on as robotics or automation, and I’m guessing it became reasonable to describe it as AI later. Years ago, we didn’t have enough computing horsepower to make any of this AI stuff particularly compelling.
Every year, people say that. Back in 2000, people said, “It was different back in ’99.” That’s the nature of exponential technologies. They make the rest of history look flat, and the current year is always far more than anything you’ve seen before. Every single year, everybody thinks this year is different.
You’ve done a lot to try and convince people of this and show how these exponential curves play out. Having paid attention to that and lived through those cycles enough times, I was building a better intuition about how the future plays out on those exponential curves and how our tools develop over those timelines. Maybe a better one, because I could invent for the actual technical future instead of the linear projected one that most people use. Do you feel like you’ve built a better instinct for that?
I have a little bit of an artificial instinct. I don’t feel it, but I know that in ten years, computing power is going to be 1,000 times more. I don’t feel it, but I know that that resource will be so abundant. If something takes 1,000 times more computing power than I have, it’s okay.
At this point, it would be easy for you to pick up a one-terabyte hard drive and throw it in the trash, but can you feel that in ten years, we’re going to hold a petabyte hard drive?
You can’t feel that.
I can’t imagine having 1,000 cell phones in my pocket, which is what I’ll have in ten years. What does it feel like? It feels bulky. What can I do with it? What can you do with it?
Compared to ten years ago, we have 1,000 cell phones in our pocket, and we wouldn’t have known what to do with that.
A lot of people understand this logically but they don’t understand it intuitively. That’s the nature of it.
When I think about this progression of what you’ve described as automated design, I think of it as using these tools to help us make better decisions, because design could be designing a park or designing a microwave installation in a ship, but it could also be designing a better set of policies for a municipality, or designing a better Master’s degree program for one of these students here. When I think about those tools, one of the issues that always comes to my mind, which people haven’t expressed, is how we use the tools. You described this as, you’ve got to set all these parameters.
Setting parameters is another way of framing this: you need to ask what you want, to be able to express better models of what your values are, what you care about. Otherwise, you don’t get answers. Now we have all these conversations about bias in AI and this stuff. What people are waking up to is that we haven’t gotten clear enough about our own values to express them to those algorithms, so the algorithms can’t give us answers that we’re satisfied with.
That’s the next step. One of the reasons why you can have design automation to design a bridge but not to design policy is because with bridges, we mostly agree on what we want. We want it to be strong, cheap and easy to maintain. With policy, we cannot quantify what it is that we want. This is important because it’s not enough to talk about it in big words; you have to be able to say it in a way that a machine can optimize. The little secret behind design automation and creativity is that you need to be able to analyze before you can synthesize.
You need to be able to make predictions before you can design. You need to know, “If I do it this way, it’s going to be good. If I do it this way, it’s going to be a little bit better,” and then the computer can find its path. It’s very good with bridges, aircraft, and things that it can analyze well but when it comes to predicting the outcomes of a policy, there’s no simulator out there that can do it. It’s very hard to design. We don’t know what to ask for and even if we knew what to ask for, there are so many unintended consequences so it’s hard to predict what’s going to happen. It’s a double problem.
One of the examples that comes to mind that plays out popularly is the question of, if an automated Tesla has to make a choice between killing a pedestrian or killing its passenger, because those are the only two possible choices, which one does it make? We’ve had cars with drivers making equivalent decisions since we’ve had automobiles, but we had no control over which choice was made, and now we do. In some sense, the technology enabled us to make a choice that we didn’t have before. We’re being forced to make that choice proactively, to tell the car what we want it to do, whereas before, we had to live with the entropy of human drivers making that choice in the moment.
It’s the same thing with bias. Bias existed all the time but AI exposes that.
It gives us the ability to make a choice about it.
Now, we suddenly have a choice. We can’t say we don’t know that it exists because it’s in our face and it’s documented. We now have no choice but to deal with it in a way that we couldn’t before.
People’s knee-jerk reaction seems to want to blame the technology for being biased. What do you think about that?
I don’t know how prevalent that really is. I’ve seen articles about this, but I never know, if you stop a person on the street, whether they think that. I don’t think a driverless car is biased. Somebody might write an article about this. You can blame the media for hyping the bias thing.
I tend to blame the media for hyping the scariest possible interpretation of everything.
I was talking to a science fiction writer. They said science fiction helped with technology, AI and robotics. My position is that it’s a detriment, because it outlined so many bad things that can go wrong, which is important, but it’s always humans against the machine. Humans either lose or win. It’s never a nuanced coexistence. When you have literature about humans, it’s nuanced and you can have characters. It’s complicated and multifaceted, with antiheroes. But when it comes to technology, it’s very black and white. Why can’t we have something that’s a little bit more nuanced, complex and multifaceted?
I’m so thankful that you described it that way because it’s lazy and irresponsible.
It’s a lot easier.
Scary stories sell, and Hollywood is using AI as a boogeyman for every story now. That gives us a distinct lack of positive possible futures, and they’re important. We need science fiction authors to be helping us. That’s what Star Trek was about. We have a whole generation that grew up with cool stories about technology from Star Trek. To some extent, that’s why we landed on the moon. You could thank Heinlein for that. What modern science fiction authors are giving us as possibilities seems like a lot of dystopia. Personally, I’m trying to boycott dystopia.
The same thing let’s say with social networks. I don’t know if you’ve seen The Social Dilemma.
I did watch it. I don’t want to talk about it.
It painted a very bleak picture, but it doesn’t talk about all the good things. There are so many good things, and it’s a question of balance. If we only talk about the bad things, we lose sight of the good stuff.
You have some experience teaching here and talking to the students. What comes to your mind that is something you’re excited about that’s a good thing? What’s a technology that’s on the horizon or is becoming practical or something where you can see how this is going to make things better and people don’t even know?
The number one thing is health diagnostics. That is so ripe for disruption and you’re seeing it everywhere. Everywhere we take AI, with all the sensors that we have, you get better detection and diagnostics. And it’s not about beating a team of doctors at Stanford, although that’s impressive. Think about how many people on the planet don’t have access to doctors at all. Suddenly, you can detect skin cancer from a camera, pneumonia from an X-ray or breast cancer from a low-cost machine. That is going to save millions of lives and untold misery, and it is already working.
That’s a good one. We worked on one of the first AI-based diagnostics, using an automated microscope for malaria. That’s a hard diagnostic because you have a parasite that’s ten microns across. The diagnostic is a human staring into a microscope counting cells for an hour. It sucks. Most of them suck at it. Most countries are lucky to have one person who’s good at this test, and we do a billion of these tests a year. This was at the Intellectual Ventures Lab. We built an automated microscope that could take the same pinprick of blood and look at those slides using neural nets.
We eventually got to the point where not only does it outperform the best humans on Earth, it’s cheap, reliable, and we can make a lot of those machines. The thing that is now interpreting the slides is finding malaria in samples where humans couldn’t find it, but we don’t even know how it works. There was a whole other conversation about transparency in AI and understanding how the algorithms work. In some cases, we have algorithms that perform so well, but we aren’t capable of understanding them. It’s fascinating and amazing. In some cases it’s like, “I don’t need transparency to know how it figured out malaria.”
Frankly, when it comes to medical diagnosis, there’s no transparency anyway for most people when you go to the doctor. I have to keep reminding people, half of the doctors are below average. This is a well-established fact and it’s a fundamental truth. Everybody has this example of an amazing doctor who can do whatever, but most people don’t get that, and then you don’t get answers. When they miss a diagnosis, terrible things happen. This can happen. It’s going to happen fast, for many reasons. That’s a no-brainer.
The proliferation of sensors, the way every new Apple Watch gets yet another capability to monitor you 24/7, modulo charging.
All of that feeds into all this AI.
I am so excited about it. I agree with you. It’s exactly what I was asking for. That’s a major frontier and people don’t see it coming.
You can take that and do these medical diagnostics. If you do it in agriculture, it’s the same thing. If you want to detect disease in plants, it’s the same thing. That’s also a big deal in terms of yield, crop disease, all of that. Agriculture is another big thing. We use terrible techniques like spraying an entire field because we cannot detect the disease fast enough. We spray the whole thing in advance, but you don’t need to do that if you have AI. Anything that has to do with diagnostics is inevitably going to be transformed. It’s hard to argue why that would be a bad thing, except for jobs. And even on jobs, we don’t want to keep people from medical diagnostics just to preserve jobs for doctors. I don’t think that’s good.
Back in the old days, when we were doing lectures on stages, I would always start my talks with a slide showing population growth over human history. That curve looks flat until the last couple hundred years, and then it goes from millions to billions. It’s the ultimate hockey stick growth curve. I would often say, “Look at this curve. Another way to read it is that we made a few billion jobs in the last couple of hundred years. We can make a few more.” People are terrified about how robots will replace jobs, but that’s not what’s happened. We make more people, but we make more jobs. I’m looking at that on a global scale and on time prices. Not to diminish the suffering of any particular person who lost their job to a robot, but overall, humans have found things to do.
The way we structure things financially, most people can’t think about jobs long-term. They could lose their job tomorrow. This is why this discussion is hard to have, because academically, I’m thinking long-term, but when you have to feed your family and you need a paycheck next month, this discussion is completely irrelevant. It’s in fact antagonistic. This is a little bit of the mismatch when academics talk about jobs versus when people who lost their job talk about jobs.
I’m cheating all the time by monitoring time prices.
I’m a little bit more sensitive to that. Also, in academia, we have this luxury of talking about long-term stuff, but that’s not the reality for most people.
It feels like almost a decade ago when you first started trying to do artistic robots or robots that could create. It’s been a while. What were you thinking at the beginning? Can you channel what your early perspective on it was? What did you think was going to happen?
This was before a lot of the AI tools that we have now were available. It started with the fact that I always wanted to paint. I’m not a good painter, but I know how to build robots. It doesn’t take a lot to connect the dots there.
There are few things I’m not good at.
Generally speaking, if you’re good at making a robot and you’re not good at something, then you make a robot that does that. That’s been my formula.
You’ve been able to outperform a good number of actual painters.
Compared to myself, for sure. It culminated in this painting course that my wife and I were taking, taught by a local artist. We took it for a couple of months, and then it came time to renew the class. We weren’t sure we were going to renew it. It was an expensive thing. The instructor came to me, and I was sure he was going to talk me into renewing it. Instead he said, “Painting isn’t for you. You should stick to engineering.” He laid it out.
He thinks your wife had more promise.
He did. After being fired from painting class, I decided I was going to start building it. I started off working with a student on this, and he built the first iteration. It was a very cool robot. He got his Master’s degree, went to work for Kiva, and made his fortune there. After that, I couldn’t find any other student to work on this. It’s almost a curse in academia.
You had such a great proof point. It’s like, “Build this robot with me and you’ll go become a millionaire at Kiva.”
What happened is that engineering students don’t want to do art and art students don’t want to do engineering. There are many reasons, mostly cultural. If an engineer does art and puts it on their resume, the big engineering people will say, “What are you doing? You’re an artist. Why are you applying?” The art people think engineering is not creative. I ended up doing it myself, and it’s been my hobby ever since.
First of all, the problem you described has deep effects on what humans are able to accomplish, by separating engineers and artists and not being able to see a future in combining them. We have a long history of problems that we had to overcome in the computer industry because we started with all engineers and no artists. Up until the early 2000s, the term UX didn’t exist. User experience wasn’t a thing. That was something where you hire somebody to pick the colors. It wasn’t until the iPod that Apple proved design matters. That was the way for every other company to justify bringing in UX people. That was the inflection point, but now we’re learning it all over again in the sense you described.
It’s the other way around also. Artists feel like engineers are always crunching numbers and there’s no creativity there, whereas I would argue that if you’re designing a bridge, there’s a lot of creativity there. It’s not creativity in the traditional sense, not big-C creativity as in painting and music, but it’s a different creativity. Creating a new amplifier circuit is a very creative process. If it’s different from any previous amplifier circuit, it’s no different from creating a new song. There’s this dichotomy, and there are a lot of things that could happen in the world if these things combined. Practically speaking, I ended up doing it myself, and I’m happy that happened because it’s a fun thing to do.
You’ve been building robots that can paint.
I’m on version four.
Can I ask you about version one? Was that a very CNC thing where you feed it a picture and it paints the picture?
Version one was more of, “Here’s a photo. Paint that photo, but I’m going to give you constraints.” You can paint it only with straight lines, you can paint it only with these three colors, or you can paint it with no more than 50 strokes. You give it some constraints, because design is always constraints versus goals, as we said earlier, and then the machine figures it out. We used evolution, which is my favorite inventive approach.
It’s the one with the longest track record.
It’s still the most innovative in terms of what it can do. It thinks outside the box, and the things that came out were amazing. If you try to paint a portrait with only twelve lines, you get some interesting things. Somebody could argue that you were the artist because you chose twelve, so it blurs the line. What I’ve been trying to do ever since is blur the line even more, keep blurring it, shifting it, moving it until I do almost nothing. I pay the power bill, that’s it. I buy the paints and I clean up after the machine, but I’m trying to remove myself, even from the experience that the robot has. I’ve talked about this in a couple of places and people said, “It’s not an artist because you control its experience. It can only paint from things that you give it.” Now it looks at all the pictures and it paints whatever it wants from what it learned from those pictures.
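The constrained-evolution idea behind those early versions can be sketched in miniature. This is a hypothetical toy, not the actual Pix18 code: the “genome” is a fixed budget of straight-line strokes, fitness is pixel difference from a target image, and a hill-climbing loop mutates one stroke at a time, keeping only improvements.

```python
import random

SIZE = 16  # tiny canvas for illustration

def render(strokes):
    """Rasterize straight-line strokes onto a white canvas (0=white, 1=black)."""
    canvas = [[0] * SIZE for _ in range(SIZE)]
    for x0, y0, x1, y1 in strokes:
        steps = max(abs(x1 - x0), abs(y1 - y0), 1)
        for i in range(steps + 1):
            x = round(x0 + (x1 - x0) * i / steps)
            y = round(y0 + (y1 - y0) * i / steps)
            canvas[y][x] = 1
    return canvas

def loss(strokes, target):
    # Fitness = total pixel difference between the painting and the target.
    img = render(strokes)
    return sum(abs(img[y][x] - target[y][x]) for y in range(SIZE) for x in range(SIZE))

def random_stroke(rng):
    return tuple(rng.randrange(SIZE) for _ in range(4))

def evolve(target, n_strokes=12, generations=2000, seed=1):
    """Hill-climb a fixed budget of strokes toward the target image."""
    rng = random.Random(seed)
    strokes = [random_stroke(rng) for _ in range(n_strokes)]
    best = loss(strokes, target)
    for _ in range(generations):
        i = rng.randrange(n_strokes)
        old = strokes[i]
        strokes[i] = random_stroke(rng)  # mutate one stroke
        new = loss(strokes, target)
        if new <= best:
            best = new                   # keep the improvement
        else:
            strokes[i] = old             # revert the mutation
    return strokes, best
```

The stroke budget (`n_strokes`) is the constraint Hod describes: “paint this portrait with only twelve lines.” A full evolutionary system would maintain a population with crossover rather than a single hill-climber, but the constraint-versus-goal structure is the same.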
You said it’s the 4th robot. This is Pix18.
By the way, I’m talking about the AI. The physical body of the robot is completely ordinary. It’s a gantry. With a real artist, you don’t usually care about the body of the artist; you care about the mind.
You characterize this as the fourth generation of AI that you’ve built, the mind of the robot.
There are bigger and bigger bodies, and that’s a whole other discussion.
We understand that. The body could be swapped out and you could attach a different brain to it. Describe it for the people who don’t know.
The first generation was more or less, “Here’s a photo. Paint it with some constraints.” It was a collaboration between an AI and a human. The next generation was, “Here’s a photo. Do whatever you want with this photo.” I removed myself from the constraints, but I still gave it a photo. It’s based on that. The third generation was, I give it a set of photos, a movie or a video of some experience, like videos of Columbia University. It’s not about choosing a frame, but about what all of this creates.
The fourth generation is working now. When I did the third generation, people said, “You decided to show it Columbia.” So I said, “I don’t even want to decide that. I’m going to hook it up to the internet, which is a bottomless pit. I’m going to give it access to the Google API. It can go places on Google Street View. It can see places. It can go anywhere it wants and see whatever it wants to see, within reason.” It has safe search. It sees what it wants and it’s going places. Sometimes, I look inside and I see where it’s going.
Isn’t there a Schrödinger’s cat problem with that? If you look inside, does that affect its choices?
I slow it down a little bit in that place. Humans aren’t completely neutral either. Humans are affected by other things, they’re affected about what sells and what doesn’t.
As far as I can tell, most human artists are affected by whether they get laid or not. What’s the net effect of this now? Describe what Pix18 does.
The net effect is that it creates bizarre paintings.
Are they incomprehensible to humans?
They are incomprehensible unless you see what it’s seen. If you can see that this robot has been looking at a lot of bicycles in Delhi, then you can see where it got its inspiration from and why it’s doing these circles. It’s not abstract once you know where it’s been. Imagine you had an alien artist. It cannot talk to you and you don’t understand how it thinks; all you can do is look back at what it’s experienced. Like an idiot savant who cannot communicate about what they’re doing, but is good at doing the thing. This is where we’re at. It’s a very interesting journey, I would say.
My understanding, to be clear, is that it goes and wanders around in some place on Google Maps. It chooses that randomly, or I don’t know if it has the capacity to get bored and go somewhere else, but then it finds things it likes or is inspired by. It amalgamates some inspiration or collection of attributes from that. It composes a painting whose inspiration is the real world, and then it paints that. The mechanical aspect of painting was fun to figure out, but we’re not getting into that for now, since that’s not where the real implications are. You’re saying that so far, most of what it’s done is impressionistic circles and colors that you might see if you were hanging out in Delhi, which will be different from what you’d get in Tulsa, Oklahoma. If you went and wandered around Delhi yourself, in real life or on Street View, you might get a feeling that goes with that painting.
I did a couple of tricks where I have a log file so I can see where it’s been. I say, save what you’ve seen. It’s like a kid; I can go walk around, but I can’t see nearly the quantity of things that it can see. The interesting thing is that machines can experience the world in ways we can’t. They can go places we can’t. They can walk simultaneously in two parts of the world. They can do things we, humans, cannot, and we, humans, can do things they can’t. I don’t think it’s a competition of whether a machine can beat a human artist. It senses the world in a different way, and that’s very interesting.
Have these artists managed to get laid by impressing other robots? How do we measure success here?
The ultimate question is, can it sell paintings? That’s an easier metric. It’s less binary; there’s a gradient there: how much? That’s still hard.
It hasn’t sold any paintings yet.
It has sold a few.
Does it have its own Etsy account or something?
There are a lot of bots trying to sell shit online, so we could co-opt one of those without you having to do some of the work.
I want to keep tabs. I’m using Andy Warhol’s definition of art, which is, “When somebody who doesn’t know you personally buys it, then it’s art.” There’s no other definition; otherwise, this brick is art. It has to be paid for by somebody who doesn’t know you. That’s the only criterion. If I have to say what my goal with this project is, I have a very concrete goal, and that is, I want to divorce and liberate art from the artist. Up until now, art has had this parasitic dependency on an artist, but now art can be independent.
I think Banksy is not an artist by the definition you stole from Warhol. His art is not for sale. It’s just creation.
Would people pay for it if they could?
Probably. At this point, there’s a strong enough brand that he can sell coffee mugs.
You can refuse the money if you want.
His refusing money doesn’t count. This is interesting because you’ve invalidated a whole class of artists who have failed to make any money but made a bunch of things that they would characterize as art. It’s not your definition; Warhol will take the hit for that one. This steers the conversation away from creativity because you’re defending that it’s an artist. Are you also contending that this artist is creative?
Does Andy Warhol have a definition of creativity?
I don’t know. That’s a good question.
First of all, it’s not a black and white thing but it’s able to make something that wasn’t there before.
That sounds good to me. I don’t know if it’s black and white, but if we take that, it’s a fair start. Your robot artist is making things that weren’t there before; so is my random number generator, which I’ve had for a while. It’s a shitty one, but it’s gotten progressively better. Is that creative?
In a very tiny way. This is why I said it’s not a black and white thing.
Is rolling dice creative?
I’ve written a whole paper about this, and it has to do with the chance that the thing would appear there spontaneously. The chance that a human would appear out of nothing by atoms bumping into each other is small. It’s not impossible. Creating a human, designing a human, evolution, or whatever you want to imagine, is creative.
Maybe creativity is setting a breakpoint of probability. Rolling a pair of dice doesn’t feel very creative because the chances of getting a seven are high at any given moment.
If you know how to throw a dice that you can create ten sevens in a row, that’s creative.
It’s more creative than one seven and more creative than nine sevens, but not nearly as creative as eleven sevens. We could distill creativity down to some probability meter. You set a dial somewhere and you say, “Below this isn’t very creative and above this is creative.”
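The probability-meter idea can be made concrete with a little arithmetic: six of the 36 two-dice outcomes sum to seven, so a run of n sevens has probability (1/6)^n, and its improbability can be measured in bits. This is a minimal sketch of the dial, not anyone's actual formula; the 20-bit threshold is an arbitrary assumption chosen only to show where the dial might sit.

```python
import math

P_SEVEN = 6 / 36  # six of the 36 two-dice outcomes sum to seven

def surprisal_bits(n_sevens):
    """How improbable is a run of n sevens in a row? Measured in bits."""
    return -math.log2(P_SEVEN ** n_sevens)

# A hypothetical "creativity dial": below the threshold, not creative.
THRESHOLD_BITS = 20.0

for n in (1, 7, 10, 11):
    bits = surprisal_bits(n)
    verdict = "creative" if bits > THRESHOLD_BITS else "not creative"
    print(f"{n:2d} sevens in a row: {bits:5.1f} bits -> {verdict}")
```

Each extra seven adds the same log2(6), about 2.6 bits, which matches the conversation: ten sevens sit well above one seven on the meter, and eleven sit above ten, with the threshold deciding where "creative" begins.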
The other thing I have to add to this is achieving some goal, which is hard to define. It’s hard for rocks to assemble into this particular shape, but still, we wouldn’t consider that creative. It has to satisfy a goal, and this is where it’s a little bit subjective.
The goal is Andy Warhol’s definition of art.
If you want to say art is something that creates emotion in somebody who observes it, that’s another definition, and then you can measure that.
Now we have two possible competing definitions of art: Andy Warhol’s, and that a viewer feels some emotion, which might be possible with your robot’s art.
There are holes in that definition as well. There are lots of things that will create emotions in you but aren’t art. There are things that will create negative emotions, and they’re not art.
Not everything that creates emotions in you is art. If your robot outperforms Jackson Pollock at the auctions, selling its art, and it becomes legitimized in the art world as an artist by Andy Warhol’s definition, then some asshole is going to come along and say, “It doesn’t generate emotion in the viewer the way a Pollock does, so it’s not an artist.” What are you going to say then?
Aren’t people always arguing about what is art?
That’s what the point of Pollock seems to be.
I don’t think one can solve this. My only point here is that humans are not the only entities that can create art. This is a very new perspective.
Humans are not the only creative force.
This has never been the case before. We, humans, had a monopoly on art, and on creativity in general, and that is now being challenged. The human is no longer the center. The center of the universe, the top of the evolutionary pyramid, all these ways in which humans are supposed to be unique, but they’re not. That’s another one of those.
On a long time horizon, once we get past this semantic argument and we have more creative machines, we could end up in a world where hopefully, we coexist with another abundant creative force of machines.
The amazingly exciting thing about this whole thing is you can invent all kinds of solutions to problems, but if you can invent a machine that can invent solutions to problems, that’s the ultimate win.
We should, because we’re getting increasingly sucky at that.
There are more problems because we are opening up so many possibilities, so let’s create a machine that can solve problems. That’s what creativity is about. There are going to be unintended consequences, but it’s an incredibly efficient way to apply our intellectual capacity. By the way, it’s the same thing with 3D printers. Instead of making a machine that can make something, make a machine that can make anything. Make the machine that can make any machine that can make anything. There’s so much more bang for the buck. There’s so much more leverage if you can make universal machines. That was the idea behind the computer. Don’t make a machine that can tabulate insurance tables; make a machine that can do any calculation. Then make a manufacturing system that can make anything. It’s paid off. Let’s create a problem-solving machine that can solve any problem. That’s what I’m after.
You have managed to articulate that in a better way than I was expecting. That’s great. Thanks so much, Hod.
It’s my pleasure.
Before we wrap up, what’s the website for Pix18?
I appreciate the time. This is brilliant.
I appreciate that you’re doing shows in these times when people are chasing their tails with negativity. Thank you.
About Hod Lipson
Hod Lipson is a professor of Engineering and Data Science at Columbia University in New York, and a co-author of the award-winning books “Fabricated: The New World of 3D Printing” and “Driverless: Intelligent Cars and the Road Ahead” (MIT Press, translated into 7 languages). Before joining Columbia University in 2015, Hod spent 14 years as a professor at Cornell University. He received his PhD in 1999 from the Technion – Israel Institute of Technology, followed by a postdoc at Brandeis University and MIT. Hod Lipson’s work on self-aware and self-replicating robots challenges conventional views of robotics and has enjoyed widespread media coverage. He has also pioneered open-source 3D printing, as well as electronics 3D printing, bio-printing, and food printing. Lipson has co-authored over 300 publications that have received over 20,000 citations to date. He has co-founded four companies and is a frequent keynoter at both industry and academic events. His TED Talk on self-aware machines is one of the most viewed presentations on AI and robotics. Hod directs the Creative Machines Lab, which pioneers new ways to make machines that create, and machines that are creative.