I know this is currently a hot topic not just in the art world but across several industries, so I want to touch on an angle that doesn’t get debated as heavily.
But over the past 6 or 8 months, I’ve gotten sick of both hearing and seeing the word “A.I” constantly thrown around like a new buzzword.
Call it machine learning or maybe even computer training, but nothing I’ve seen to date suggests that our electronic devices actually have a mind of their own. In fact, calling it “A.I” has led to expectations the technology consistently fails to meet.
Take chatbots, for example: there have been allegations that they lean a certain way politically.
This is not true A.I. And I think there’s a huge societal danger in introducing robots that come with a built-in far-left or far-right slant. Sorry, but machines should stay neutral and be allowed to evaluate all sides of the evidence, not just one.
Then of course we have the popular “A.I Art Bots” like Stable Diffusion, Midjourney, etc. Again, I struggle to find the intelligence in programs that make nonsensical errors like rendering the wrong number of fingers, or that stumble over words and sentences with multiple meanings.
I have no doubt that the technology will get better. I broke out of the delusion of "but it can never do this!" a long time ago.
But until science manages to create a virtual brain that can think for itself and make decisions that don’t come pre-installed with someone else’s bias, I feel all these technologies being marketed as “A.I” are going to suffer for a long time.
In fact, I mentioned earlier that art generators are still not perfect and get their renders wrong before a human cleans them up. Well, imagine trusting this same technology with your life. If you boarded an A.I airplane or an A.I bus and it steered itself off a cliff, that’s a mistake you can never recover from. And in that situation, who do we hold accountable? The robot? The company that manufactured it? Or the scientists who pushed the technology out prematurely?
On the other hand, I can also see the argument that building robots with the same mental capacity as humans carries its own set of consequences. For example, there are 8 billion humans on this Earth, but if a single robot can work harder and think faster than all of us, then what exact purpose do we have left once we’ve built a replacement that’s superior in every way?