Anonymous ID: 7eb81a Oct. 3, 2022, 4:27 a.m. No.17624112   >>4125 >>4139

>>17623983

>See, your arguments are the arguments of a bot.

Can we please stop accusing each other of being bots? I don't think you are a bot. And I know I am not.

>You make even less sense now.

Or you fail to see the bigger picture?

>- How did these humans know to read these books?

They were taught.

I know what you're saying; it had to start somewhere.

Curiosity and creativity are fairly new to AI, but not impossible.

Emotions are even harder. Since we don't even understand human emotions, how could we simulate them in a foreign system?

But even still: https://www.theatlantic.com/technology/archive/2017/06/artificial-intelligence-develops-its-own-non-human-language/530436/

I'm not saying this article is the truth™, you know, Google and Faceberg. Just food for thought.

>So you admit that remote driven cars (taxis) are incredibly complicated to do even closely like a human taxi driver, who is cheap.

Nothing to admit, this is fact.

>Isn't it weird that this marketing PR bullshit gets pushed for 20 years now with nothing to show off?

Because the PR promises perfection. That is not possible. Why do I have to reiterate this point?

>And why would someone not go for automating trains, which would be way less complicated?

Automated trains are fairly common. (pic related)

>Why also not automate retardo jobs like bankers, doctors, attorneys and other shit, who are just following strict protocol?

Because those people have the means to avoid being replaced.

>No, it's bullshit. It's marketing speak for computer ALGOs.

Algos are specialized.

AI = Algos. I never contested this point.

<Are you?

>Yes.

I guess I was wrong.

Weird that you use it as an insult towards me then…

>Computers do exactly what they were told and nothing else.

>If they don't, then it's probably because you made a mistake or a faulty CPU.

Proving my point again…

>If you program a computer to kill another human, you would be liable for murder, not the computer.

True, but that is because AI doesn't have individual rights (don't get me wrong, I would never argue that they should have), hence we don't recognize them as actors with agency.

>Because my father is dead, and I had no contact to my mother for 25 years now.

Does an AI need to have the programmer living beside them to follow their code?

>You aren't?

No.

>You have a technocratic world view, which is a retarded world view.

No I don't. And yes it is.

>Humans are not body parts either.

Humans are a collection of body parts, human body parts are a collection of human cells. Everything is made up of something.

>And humans are not the same.

I never said that they were.

>For a technocrat it's all just body parts, machines and every human is the same as any other human.

Good I'm not one then…

>BECAUSE you fucking retard, a computer program that is used everywhere to drive shitty cars will make the same mistake everywhere and thus one single error has an insane amount of consequences.

Unless the learning algorithm can error correct based on an internal score. You sure you are a software developer? You sound like you have no idea how programming works…
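To make "error correct based on an internal score" concrete, here is a minimal sketch (the function name and numbers are made up for illustration): a learner that measures its own error on each observation and nudges its estimate, instead of repeating the same mistake forever.

```python
# Minimal sketch of score-based self-correction: the learner computes an
# internal error signal and adjusts itself in proportion to it.

def online_mean(observations, lr=0.5):
    """Track a value by nudging the estimate toward each new observation."""
    estimate = 0.0
    for obs in observations:
        error = obs - estimate   # internal score: how wrong were we?
        estimate += lr * error   # correct in proportion to the error
    return estimate

# The estimate converges toward 10 as corrections accumulate.
print(round(online_mean([10, 10, 10, 10, 10, 10]), 2))  # -> 9.84
```

The point is only the shape of the loop: mistake, score, correction. Real learning algorithms do the same thing with far more parameters.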

>If you were a software developer, you would know this.

Lol

>One nurse may take wrong medication once and that can have fatal consequences.

She'll never do that again. Kek.

>One pill machine will make the same mistake over and over until it's corrected

until it's corrected.

How are you going to correct a mistake you make if you are never made aware of it being a mistake?

Anonymous ID: 7eb81a Oct. 3, 2022, 4:43 a.m. No.17624136   >>4145 >>4158

>>17624130

Yup… You failed to grasp the difference.

AI/machine learning takes a look at the available data and makes a lot of decisions that it then scores.

Higher-scoring candidates move on to the next permutation, ending in something it deems to have the highest score.

 

To use your example:

Teaching the AI to say "Polly wants a cracker"

would take a long time: you would start with something not even resembling words, until you got something close but not 100%.

This is the wrong usecase for AI and machine learning.
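To make that score-and-permute loop concrete, here is a toy hill-climber (a sketch for illustration, not anyone's real system): it starts from random noise not even resembling words and keeps only mutations that score at least as high, until it reaches the target phrase.

```python
import random

TARGET = "Polly wants a cracker"
ALPHABET = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ "

def score(candidate):
    # Human-defined score: how many characters match the target.
    return sum(c == t for c, t in zip(candidate, TARGET))

def evolve(seed=0):
    rng = random.Random(seed)
    # Start from random noise "not even resembling words".
    current = "".join(rng.choice(ALPHABET) for _ in TARGET)
    while score(current) < len(TARGET):
        i = rng.randrange(len(TARGET))
        mutant = current[:i] + rng.choice(ALPHABET) + current[i + 1:]
        if score(mutant) >= score(current):  # higher score moves on
            current = mutant
    return current

print(evolve())  # -> "Polly wants a cracker", after thousands of mutations
```

Note how wasteful this is compared to just storing the string, which is exactly why it is the wrong use case.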

 

Whereas automation is, as you put it, hit record and play back.

Anonymous ID: 7eb81a Oct. 3, 2022, 5:09 a.m. No.17624198

>>17624139

>By whom?

Someone who was taught it themselves.

>There is no creativity. It's just stupid algos.

So how does Stable Diffusion create images?

It might not be creativity as you or I understand it, but it has to come from somewhere.

>It's at best the creativity of the developer, at worst it's just random nonsense.

Creativity can be a lot of things. For example, the creativity to imagine complex systems that can themselves simulate creativity.

>Everything can be art

Especially if you charge other people for it. Kek.

>so you can use random shit and call it art, which is pointless.

It is in the eye of the beholder.

>Why?

>And for what?

Emotion is a crucial part in creativity. It is what makes us human.

You could say it's an expression of our soul.

Simulating that could advance AI 100 times over.

>The programs that I write are perfect

Alright, you are a programmer. Kek

>If you admit that it doesn't really work, why do it in the first place?

You have to start somewhere, right? And define "doesn't really work". Are we back at "my AI isn't a perfect consciousness and therefore not an AI"?

>It's not?

>Why is that?

Because that is not the intended goal of AI; then you would just use automation.

>You admit to failure and then follow up with "that's ok". it's not.

No, you error correct.

>They aren't.

Uhm… yes they are.

We've had one driving constantly here for over a decade now.

>And most goods are transported using trucks, not on train.

How does that defeat my point?

>It would actually make the most sense to replace these retards

I agree. They don't.

>Algos are not AI.

No, but AI is Algos.

>They have no agency of themselves.

Not the automatons that you keep calling AI.

Whether some AI actually does is another question entirely. I'm not gonna sit here and argue that we've created AI with agency (yet).

>You are not making sense.

Why does it matter if your dad is dead and you haven't spoken to your mom for 25 years, if they already programmed you? That was my point.

>I hate my mother and I would and have never done what she wanted me to do.

Also, why I said parents AND caretakers.

>You do have a technocratic world view.

Keep telling me what I believe.

>Bullshitting

Alright, I've conceded that you might actually be a programmer, but I can tell you have never worked with machine learning.

>Less code is actually better.

In general yes.

I don't know what I made more complicated.

Machine Learning algos are in general very complex. The more we can simplify them the better. But they serve a specific purpose as they are.

>Stop with the AI bullshit talk.

Wat? It takes 2 to tango, my fren.

>Actually it happens

I know.

>but why does it happen?

Incompetence.

>Automate this part, so that there is one more tech retard and a few less nurses once again.

And this is a bad thing because… if the automation makes mistakes we can't error correct them? And you are telling me I don't make sense.

I see no downsides to automating that, if the automation can reduce stress for the nurses and still keep those in need medicated.

>That's why I'm against automating all this crap, and would instead get more nurses for a better result.

Yes. More stressed and undereducated nurses. I see no problems with that. Kek.

>People at hospitals will figure out that something is wrong, at least when patients die.

And then you error correct the machines.

Again, how is this different?

>>17624145

>Packages with barcode should in theory work 100%, but they don't.

So because there can be things that can't be solved solely with automation, automation is invalid?

If the barcode is pristine, it will work. Other factors play in, some that can't be automated for.

But as you say, 95% correct at a higher speed might still be preferable.

Again, you fail to make a proper point.

>>17624158

>since a human would need to judge what scores value is.

Yes, at some point a human programmer has to set some rules for the AI to make that score.

You would probably call that God in the case of humans.
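As a sketch of that point (strings and rules made up for illustration): the "rules" boil down to a human-chosen scoring function, and a different function makes the exact same search prefer a different answer.

```python
# The search itself is just max() over a score; only the human-written
# scoring rule decides what counts as "best".
candidates = ["aeiou", "xyzzy plugh"]

# Rule chosen by one programmer: longer is better.
longest = max(candidates, key=len)

# Rule chosen by another: more vowels is better.
most_vowels = max(candidates, key=lambda s: sum(c in "aeiou" for c in s))

print(longest)      # -> "xyzzy plugh"
print(most_vowels)  # -> "aeiou"
```

Same data, same mechanism, different winner: the value judgment lives entirely in the score the human set.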