I wish it wasn't called "A.I" (an alternate viewpoint on where technology is headed)


#1

I know this is currently a hot topic, not just in the art world but in several industries, so I want to touch on an angle that isn’t debated as much.

But over the past six to eight months, I’ve gotten sick of hearing and seeing the word “A.I” constantly thrown around like a new buzzword.

Call it machine learning or maybe even computer training, but nothing I’ve seen to date suggests that our electronic devices actually have a mind of their own. And in fact, calling it “A.I” has led to expectations the technology keeps failing to meet.

For example, when it comes to chatbots, there have been allegations that they lean a certain way politically.
This is not true A.I. And I think there’s even a huge societal danger if we introduce robots that automatically come with a far-left or far-right slant. Sorry, but machines should stay neutral and be allowed to evaluate all the evidence, not just one side of it.

Then of course we have the popular “A.I Art Bots” like Stable Diffusion, Midjourney, etc. Again, I struggle to find the intelligence in these programs when they make nonsensical errors like rendering the wrong number of fingers, or struggle to interpret words and sentences that have multiple meanings.

I have no doubt that technology will get better. I broke out of the delusion of “but it can never do this!” a long time ago.

But until science manages to create a virtual brain that can think for itself and make decisions that don’t come pre-installed with someone else’s bias, I feel all these technologies being marketed as “A.I” are going to suffer for a long time.

In fact, I mentioned earlier that art generators are still not perfect and get the rendering wrong before a human cleans it up. Well, imagine trusting this same technology with your life. If you got on an A.I airplane or an A.I bus and it proceeded to drive itself off a cliff, that’s a mistake you can never recover from. And in that situation, who do we hold accountable? The robot? The company that manufactures it? Or the scientists who pushed the technology out prematurely?

On the other hand, I could also see an argument that building robots with the same mental capacity as humans could have its own set of consequences. For example, there are 8 billion humans on this Earth, but if a single robot can work harder and think faster than all of us, then what purpose do we have left once we’ve built a replacement that’s superior in every way?


#2

One more thing to add: I’ve been following machine learning for a decade, and even managed to use it before it blew up in popularity. So my stance is not deliberately “pro” or “anti” technology.

I found A.I denoising technology (as it was called) to be very helpful, but it exhibited the same flaws I described in the OP.

Without a real brain, the computer would always airbrush over detail that I wanted to save. And sure, there were some levers that helped guide it, but it got worse when the detail I wanted to keep was so granular that I would have had to explain in words why it shouldn’t be removed.
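
To make that “levers” problem concrete, here’s a rough sketch. It uses a classical denoiser (OpenCV’s non-local means) as a stand-in for the ML tool I actually used, since the failure mode is the same: one strength knob, and no way to tell the program which detail to keep. The file names are just placeholders.

```python
# Rough illustration of the single-knob problem, using OpenCV's
# classical non-local-means denoiser as a stand-in for the ML tool.
import cv2

img = cv2.imread("noisy_photo.png")  # placeholder input file

for strength in (3, 10, 30):
    # Higher h = more noise removed, but also more fine detail
    # airbrushed away. There is no knob for "keep THIS detail".
    out = cv2.fastNlMeansDenoisingColored(img, None, h=strength, hColor=strength)
    cv2.imwrite(f"denoised_h{strength}.png", out)
```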

So for those who say A.I is a tool or is the future, well, not quite. Usually with a tool, you want it to act and behave predictably so that you’re not spending too much time fixing its mistakes. Like a hammer that hits nails: it does so reliably with each strike. But imagine if the hammer you’re using would randomly twist or bend so that there’s a 50% chance of breaking the nail. That leads to problems…

That’s my concern with this technology and how it’s marketed. The flaws are baked in as a result of it falling below human consciousness.

Edit: And speaking of the “tool” part, another important distinction is that all this A.I software only becomes useful once it has been trained on pre-existing data. That’s a huge difference from software like Photoshop or 3DS Max, which doesn’t come pre-loaded with other people’s work (apart from a few stock assets like a polygonal cube or cylinder). This further complicates calling it “A.I” or machine learning.

Some may like A.I because it’s fast, helps them save money, or enables them to do things they normally couldn’t do on their own. In that case, I can definitely see the argument.

However, I will say again that it’s still important to temper your expectations. For example, in my current field, working professionally in the animation industry, all these machine learning/A.I tools are still a complete afterthought. I would argue it’s not that we would oppose a “make art” button; it’s that file organization and accuracy are still key components of making a cartoon. Maybe in the future we will get robots clever enough to assist us in our day-to-day activities, robots we could trust to do things automatically without supervision.


#3

I think machine learning will improve until it makes far fewer mistakes.
It’s the new gold, so the best minds are working on it.
Nothing is error-proof; every part has some percentage of failures. We can’t guarantee a metal part won’t break at 120 MPH while we’re driving the interstate.
The same goes for ML. It might still have some errors, but if it reaches some critical point of reliability, they might not matter.
Currently it’s way too crude and makes such banal mistakes that people make fun of it, not comprehending that it might be the worst enemy of their lifetime. Or the savior; we will see.
I see you have more practical experience with it, so my conclusions might be more theoretical.
I think they could combine different tools rather than relying solely on ML for tasks like math, because why make the model guess when you have exact calculation? Why not combine those tools for better accuracy?
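
For example, here is a toy sketch of that routing idea (the model fallback below is a hypothetical placeholder, not a real call): if the query parses as arithmetic, compute it exactly instead of letting ML guess.

```python
# Toy sketch of the tool-combining idea: route arithmetic to an exact
# evaluator and only fall back to ML for everything else. The fallback
# is a hypothetical placeholder, not a real model call.
import ast
import operator as op

OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul, ast.Div: op.truediv}

def safe_eval(node):
    if isinstance(node, ast.Constant):  # a plain number
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](safe_eval(node.left), safe_eval(node.right))
    raise ValueError("not simple arithmetic")

def ask_language_model(query: str) -> str:
    return f"(hypothetical model answer for: {query!r})"

def answer(query: str) -> str:
    try:
        # If the query parses as arithmetic, compute it exactly.
        return str(safe_eval(ast.parse(query, mode="eval").body))
    except (SyntaxError, ValueError):
        return ask_language_model(query)

print(answer("12.5 * (3 + 4)"))     # exact: 87.5, no ML involved
print(answer("who painted this?"))  # falls through to the model
```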

My thoughts exactly.
I, and I guess other humans, considered my mental abilities the pinnacle and purpose of my existence. I can’t be proud of my height, looks, or any physical attributes, as they’re given. And all the automatic processes of my body, be it digestion, sleeping, or seeking a partner, are purely biological. Nothing to be ashamed of, but nothing to be proud of either, as they’re just firmware scripts.
So we had the freedom of self-development, education, and competition, the prospect of inventing and being creative.
It was the essence and definition of our existence, and the purpose for creative people: something neither animals nor machines could do. That set us apart, and closer to the creator.

Once I saw Stable Diffusion, I realized that all our secrets, the intimate part of being a creative, had become so easily reproducible that they’ve lost their value. No one will be intrigued watching an artist paint a picture when “hey, a computer can make it much faster, and make hundreds of them in a second.” What’s the point, really? What’s the point for kids entering art colleges, knowing they can so easily be replaced by algorithms and won’t make a buck?

But that’s in the long run. Right now some can use ML to their advantage, some refrain, and having an artistic eye is still paramount. It’s the mediocre artists who will be weeded out, and that has been happening for quite some time with the global market, stock sites, and now image generation.


#4

What I find jarring is how people mock artists because they are being replaced. As if whatever can be automated is worthless and some kind of joke. It just shows how fragile our world is, with people jumping to dump anyone they can to save a buck.


#5

Regarding your car example, there are actually two reasons why it differs from my fear of “A.I”.

  1. As human drivers, we can use our judgment to speed up or slow down. There’s also a biological response: whenever a human feels they’re in danger, “fight or flight” kicks in, pushing us to do whatever it takes to survive.

  2. Humans designed those cars around math and science formulas. We would not expect a car to disintegrate under normal conditions unless it was poorly built or had some kind of flaw.

Now, with machine learning/A.I, that all gets thrown out the window. While argument #2 could technically apply to robots (i.e., they ARE a product of math and science), it also means they are subject to glitching out. And because these same robots have no internal organs that humans can relate to, they’ll never understand “why” these glitches are bad or how to properly react to them.

So imagine your robot bus driver experiences a “glitch” and starts speeding past 120 km/h. It won’t stop unless you force the robot to turn off.

In fact, there was a real example of this last year, where a Tesla allegedly malfunctioned and kept accelerating, killing two people.

You also raised the point that machine learning could reach a critical point where the good outweighs the bad. Well, if you ever play video games, you’ll know that even though the technology has gotten a lot better, games have also become a lot more prone to breaking down. And it doesn’t take much to see how one bad line of code can translate into devastating (but also hilarious) results.

Again, I’m not afraid of robots being seen as tools. But I am upset that they’re still being marketed as “intelligent” when even the smallest robot error could threaten the lives of millions of people. And it feels like the more society pushes these robots without trying to regulate them, the closer we get to a point of no return where they will one day outnumber us and hold humanity hostage.


#6

These people are in for a rude awakening.

After spamming the internet with thousands of their generated images, some of them are now trying to claim copyright status or make money off them.

But when the same technology makes it so easy to rip off another artist’s style and spit out a thousand pictures, the idea of “ownership and value” is completely destroyed.

Why pay for your generated picture of a dog when I can just screenshot it, put it through the A.I copy machine, and boom, I now own a hundred variations of your dog picture?
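
And that “copy machine” isn’t hypothetical. Here’s a rough sketch of such a variation loop using the open-source diffusers library; the model id, file names, and prompt are just example placeholders.

```python
# Rough sketch of a variation loop with the diffusers library; model
# id, file names, and prompt are example placeholders only.
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init = Image.open("your_dog_screenshot.png").convert("RGB").resize((512, 512))

for i in range(100):
    # strength controls how far each variation drifts from the original
    out = pipe(prompt="a dog, digital art", image=init, strength=0.6).images[0]
    out.save(f"dog_variation_{i:03d}.png")
```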


#7

I guess you’re implying machine learning is inherently unstable and will always have some percentage of bugs due to its “diffuse” nature.
I see it struggles with precision, though some results are astounding, like passing exams meant for university graduates with human-level or better scores.
I also thank you for the clarification regarding mechanical parts; perhaps their testing provides a large reliability margin.

I used GPT today as my assistant to resolve some logical questions, and it’s very capable with general questions.
It lacks specialized knowledge, but only because it doesn’t have access to such databases to learn from.

I think we might see specialized GPT assistants. It would make perfect sense. One could remind you of your schedule and speak with you, remembering your routines, strengths, and weaknesses.
Some academic institutions could create their own databases for everyone to benefit from.
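
Even today you could fake a crude version of that by packing a person’s routines into the system prompt. A hedged sketch with the openai Python SDK; the profile text and model name are made up for illustration.

```python
# Hedged sketch of a "specialized assistant": the profile below is a
# hypothetical stand-in for a real user database.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

profile = (
    "You are a personal assistant for Alex. "
    "Routines: gym Mon/Wed 7am, standup daily 9:30am. "
    "Weakness: forgets to submit timesheets on Fridays."
)

reply = client.chat.completions.create(
    model="gpt-4o-mini",  # example model id
    messages=[
        {"role": "system", "content": profile},
        {"role": "user", "content": "What should I not forget today? It's Friday."},
    ],
)
print(reply.choices[0].message.content)
```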


#8

I would take those exam results with a grain of salt.

ChatGPT answers questions after being trained on the entire internet. That would be like students being handed the textbook during their exams (which would defeat the entire point).

Again, we’re not actually seeing any intelligence. The machines are really good pattern solvers, but that’s it.

See my earlier examples that involve life or death. Could we trust a robot to perform open-heart surgery on its own? Or even solve the problem of self-driving cars without injuring pedestrians?

With ChatGPT or image art generators, the machines are allowed to make mistakes because those mistakes don’t put anyone’s life at risk.


#9

It would be beneficial to develop firmware that is imperative and overrules machine learning in critical situations. This is essentially how we are built. While we have some freedom that allows for better adaptation, in critical situations our 1.0 firmware takes over, making instant decisions. Machine learning, by itself, is closer to consciousness, but an instinctual level is a necessary addendum.
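
As a minimal sketch of what I mean (the limit, signals, and policy below are all hypothetical), the imperative layer is just a hard-coded function that always gets the last word over the learned controller:

```python
# Minimal sketch of an imperative "firmware" layer that overrules a
# learned controller; the limit, signals, and policy are hypothetical.

SPEED_LIMIT_KMH = 100.0

def ml_controller(sensors: dict) -> float:
    """Stand-in for a learned policy; returns a target speed in km/h."""
    return sensors.get("suggested_speed", 0.0)

def firmware_override(target_speed: float, sensors: dict) -> float:
    """Hard-coded rules that always win, like instincts over thought."""
    if sensors.get("obstacle_ahead", False):
        return 0.0  # hard stop, no negotiation with the model
    return min(target_speed, SPEED_LIMIT_KMH)  # clamp runaway commands

# Example: the learned policy "glitches" and requests 180 km/h.
reading = {"suggested_speed": 180.0, "obstacle_ahead": False}
print(firmware_override(ml_controller(reading), reading))  # -> 100.0
```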


#10

Evolutionarily speaking, we do retain past behaviors, but depending on the situation we’re put in, there’s still a lot more nuance involved before we switch to the primal level.

This might be a morbid example, but I’m reminded of 9/11 and how the office workers had to choose between slowly burning to death or leaping out the window…

We just can’t mathematically solve dilemmas like that. Even worse, with a robot we would have to directly program those choices into it. But even the smallest amount of bias being introduced could lead to these machines going on a rampage or harming us if they thought that was the “right” thing to do.

It gets even scarier when we think about how a robot would react to handling nuclear weapons. We’ve had many close calls throughout history that still involved humans judging whether it was worth pushing the button or not. But since robots process their information in milliseconds, they could set off the nukes before anyone had time to intervene.


#11

It may turn out much worse than you expect. If some countries continue pushing their agenda of “protecting” and expanding their borders, we might see not just hordes of GPS-guided drones, but AI-driven ones.
If that sounds too far-fetched, look at Ukraine: yesterday a peaceful country, today a bloodbath for no particular reason. So unfortunately, it can happen once the jackals sense weakness.
Some countries are already implementing AI in their militaries, whereas the US holds back for ethical or whatever reasons. Guess who wins? It’s always the least principled one.
We were approaching digital medieval times (a lot of recurring medieval patterns), but now we might be approaching AI wars, a nightmare we wouldn’t even see in our dreams. Watching T1 and T2, I really didn’t believe it would become such a not-so-distant reality.