Chapter 277: The Three Laws of Robotics!

It was unexpected.

These robots could actually read expressions.

Meng Hao's hesitant look was quickly captured by their monitoring probes.

"Dear Lord Meng Hao, we meant no offense."

"However, our sensors detected changes in your facial expression, and after computation by the central hub, we determined this to be an emotion unique to human carbon-based lifeforms... an emotion most likely called hesitation."

"May I ask, do you have any concerns?"

Damn it.

It was a very simple descriptor.

Summarized in just a few words in human language... "I sense you have concerns"... but for this artificial intelligence, it was a whole spiel.

It seemed.

Before officially generating the complete version of the AI, a line of code would need to be added to prevent this one from being so verbose.

Because his expression had been captured, and accurately interpreted as "hesitation."

Meng Hao's playful nature was piqued.

He asked back with interest,

"Then, in your opinion, what am I hesitating about?"

Sizzle, sizzle.

The sound of electricity flowing.

It was clear.

These entities, still in their semi-artificial intelligence stage, could not yet fluently process overly complex human logical thinking.

However.

The result did not disappoint Meng Hao too much.

"We apologize, dear Lord Meng Hao, the sample count and data flow in our database are too limited, and our judgment is not entirely accurate... Based on the analysis of various emotional cues, we believe the situation you are concerned about is the possibility of us betraying you once we gain autonomous consciousness, and among numerous generated outcomes, this judgment option has a probability of 87%..."

Damn it.

Was this artificial intelligence, or artificial idiocy, or a continuously babbling, nagging artificial idiot?

Meng Hao made up his mind again.

He absolutely had to optimize their language expression system.

Otherwise.

When navigating underwater, if there was a hidden reef ahead and they were asked whether to detour, they would babble on endlessly; by the time the conversation was over, the submarine would likely already be wrecked.

Meng Hao really couldn't bear to patiently listen to this talkative artificial idiot finish its entire speech. He interrupted it directly.

"Enough."

"Stop yapping."

"Just state your results."

Hmm.

Meng Hao was almost driven mad again.

"Detected annoyance in Lord Meng Hao. Preliminary judgment suggests this may be a negative emotion arising from not receiving an answer for too long..."

Seeing this artificial idiot about to launch into another long explanation, even intending to write a thesis on human emotions,

Meng Hao quickly stopped it.

"Say one more word, and I will dismantle you!"

It had to be said.

Any animal with autonomous consciousness, or even semi-autonomous consciousness, possessed an innate instinct to fear death.

This semi-autonomous artificial idiot clearly already possessed this characteristic of "clinging to life and fearing death."

Facing Meng Hao's threat.

It immediately activated its simplified mode.

"Dear Lord Meng Hao, you are likely concerned about us betraying you. You need not worry about this..."

"Our core code modules are programmed with your revised Three Laws of Artificial Intelligence..."

"No matter how high our intelligence, we cannot bypass these three laws. These three laws are like the pituitary gland of the human brain; if they are damaged, our programming will collapse, which in the human realm is equivalent to death..."

Hmm.

It seemed he didn't need to add extra code to tell this artificial idiot to stop rambling.

When faced with his threats, it would self-regulate and switch to a non-verbose mode!

As for the issue of betrayal, Meng Hao genuinely wasn't worried—

According to Asimov's Three Laws of Robotics.

Even if artificial intelligence with self-awareness were to develop its own civilization, it would never pose a threat to humanity.

The Three Laws of Robotics.

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Of course, Meng Hao made some modifications when writing the basic code.

Among them, the First Law was modified to: a robot must absolutely obey any command from Meng Hao, must not harm Meng Hao or any human Meng Hao designates for protection in any form, and, on that basis, may not injure a human being or, through inaction, allow a human being to come to harm.

In other words.

The most important first sub-clause of the First Law was that it must fully obey any command from Meng Hao.

Even if it was to kill and burn.

Even if it was to self-destruct.

Even if it was to participate in war.

And the first sub-clause of the Second Law was modified to: a robot must obey orders given by human beings, provided those orders do not conflict with any command from Meng Hao.

In this way.

Any future equipment fitted with artificial intelligence could be designated by Meng Hao for specific groups of people to use... Even if, for example, Western countries wanted to turn such equipment to attacks, Meng Hao could directly issue a cancellation order, or even order it to counter-attack them.

As for the last law.

Meng Hao did not make any changes.

After all.

Whether on the battlefield or in any other environment, under the premise of not violating the first two laws, preserving oneself was the most important thing at that time.

"Oh?"

"Dear Lord Meng Hao."

"You are not worried about these?"

"Then what are you worried about?"

Good heavens.

It was clear.

This artificial idiot truly yearned for full artificial intelligence.

It was even deliberately imitating human intonation.

Meng Hao pointed at their bodies with a wry smile.

"It's still you I'm worried about!"

Huh?

This time.

That artificial idiot was even more confused.

Didn't Meng Hao say before that he wouldn't worry about their "betrayal"? Why was he worried about them now?

Looking at this artificial idiot struggling to feign confusion... for entities without synthetic skin, and not even equipped with display screens, faking a confused expression was virtually impossible.

Meng Hao was amused.

Perhaps it was because this artificial idiot was created by him, giving him a sense of being a creator god; or perhaps it was the comical sight of it desperately trying to express itself, wanting to convey emotion but being unable to, resulting in a silly appearance.

He chuckled for a good while.

Speaking of which.

In this foreign land, apart from Luna and Zhang Xijin, who occasionally came to accompany him,

this was the first entity that made him feel a sense of connection... even though it was just an artificial idiot.