Raw LLM Responses
Inspect the exact model output for any coded comment, or look a comment up by its ID.
Random samples

- Is that book legit I have saw others post about it but said AI was just posting … (ytr_Ugzg4TENP…)
- In no way am I trying to be "smarter" than this guy, but... We've been hearing t… (ytc_Ugx4W-2hM…)
- I think the a.i. are much more like humans that anyone seems to want to think… (ytc_UgziQvlqc…)
- So much automation and technology and yet we still all work 8 hour days 5 days a… (ytc_UgzWwJrll…)
- It's a pity the film makers didn't make this properly. Such devices (yes, avail… (ytc_UgzrdJ-Ql…)
- How bout a large language model that only praises you for how great your videos … (ytc_UgwS_XfHS…)
- @robertd9850 No, it's called having experience! I tried searching for a definit… (ytr_Ugx8-2Fsh…)
- You know this thing businesses do where they "negotiate with their employees on … (ytc_UgztfwQ7I…)
Comment
> It's simple. The government makes a legislation according to which ALL robots with artificial intelligence must be programmed with a special code that prevents them from becoming conscious. I'm sure we can create a robot that pretends to be conscious but it actually isn't.
>
> It's probably not that simple cause I'm not an AI engineer or a politician but I think if we worked on this basic idea we could regulate it well. Correct me if I'm wrong. It's a very interesting topic for discussions.

youtube · AI Moral Status · 2017-02-23T14:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgjrmoGYAAEue3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugi1Pt8MqFieYngCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugg_s6KlAogU3XgCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugi9VBjs69mr6ngCoAEC","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UghgyO9SEyAWYngCoAEC","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgjWIUkzgDDkGXgCoAEC","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugho31qILwfle3gCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgiVEECnLWqvWHgCoAEC","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgjLsMyw0B8wCHgCoAEC","responsibility":"distributed","reasoning":"mixed","policy":"ban","emotion":"fear"},
  {"id":"ytc_UggpPbJsPQwWkngCoAEC","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
```
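A raw response like the one above can be turned back into the per-comment coding shown in the table by parsing the JSON and indexing on the comment ID. The sketch below is an assumption about how such a lookup might work, not the tool's actual implementation; the field names mirror the raw response, while `index_by_id` and the trimmed two-record payload are hypothetical.

```python
import json

# Hypothetical, trimmed batch response (two records from the array above).
raw_response = """
[
  {"id": "ytc_UgjWIUkzgDDkGXgCoAEC", "responsibility": "government",
   "reasoning": "consequentialist", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgjrmoGYAAEue3gCoAEC", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "approval"}
]
"""

# The four coding dimensions plus the ID, as seen in the raw response.
EXPECTED_FIELDS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def index_by_id(payload: str) -> dict:
    """Parse the LLM batch output and index records by comment ID,
    dropping any record missing an expected field."""
    records = json.loads(payload)
    return {r["id"]: r for r in records if EXPECTED_FIELDS <= r.keys()}

codes = index_by_id(raw_response)
print(codes["ytc_UgjWIUkzgDDkGXgCoAEC"]["policy"])  # prints "regulate"
```

Validating fields before indexing matters here because LLM batch output is not guaranteed to be well-formed: a record with a missing dimension is skipped rather than allowed to break the lookup later.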