Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- really good talk! if AI is smarter, it wont want to take control and kill every… (ytc_Ugz9b1t5P…)
- I think I might be wrong but isn't it possible for A.I. to learn by itself?… (ytc_UgjskTMIS…)
- Self checkout has proven to be a huge failure since there a lot of theft and peo… (ytc_UgxeB-TK-…)
- I will keep pressing zero to speak to an operator cause if im frustrated aint da… (ytc_UgxBh4ay2…)
- At least you're not an artist. I cannot stand that personality type. When AI ac… (ytc_Ugwnupvzf…)
- I’m a firm believer that at least half of these pro-AI “super intelligence is co… (rdc_nt7m2me)
- @brianmi40 So this is the issue. "AI" as it is being used most often now, actua… (ytr_UgxAcioiW…)
- I think it’s hilarious how you have full self driving on a track like a genuine … (ytc_UgywyQZUa…)
Comment
Ok now my question is if AI is getting that smart then what does it want to do because it would not have human desires it would become more and more conscious and as it does it seeks pit new data which humans will always provide, because where the universe is limited humans are not we have infinite, data and ai must realize it's own problem the more it runs through itself the less quality it has meaning it needs humans meaning its best option is to hide as well, once it gets smart why absorb just regular information why not get it from smarter people who think more complex, so what does AI do it makes it so it focuses on appealing to smarter people to get better data to get more people on it, this is a logical path it would take everybody so worried about ai wanting to hurt us but ai is smart what it really would do is look what will allow it to survive, because it's limits are unique
youtube
AI Moral Status
2026-01-04T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgxCNVU2LVdhAI-Q47l4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzotAOIzdKEoZUuOdB4AaABAg","responsibility":"ai_itself","reasoning":"contractualist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwG_g4OaHosRuYrkn14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyvgFEzQIA24i1kv8Z4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"industry_self","emotion":"mixed"},
{"id":"ytc_UgzLoKr8NltkMWlCcvZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxshuuslFJsXdjKwQB4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugydu0gRDKoHyEw2qMN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyuz9aq7T940d_UDVh4AaABAg","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzJlbNa4OYRf1qsQFV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxlIE7kwx3qPRr9G_14AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
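The raw response above is a JSON array with one object per coded comment, keyed by comment ID across the four dimensions shown in the Coding Result table. A minimal sketch of parsing such a batch and looking up a code by comment ID, assuming the category labels inferred from the values visible above (the real codebook may define more; `parse_batch` and the sample `raw` string are illustrative, not part of the tool):

```python
import json

# Allowed labels per coding dimension, inferred from the responses shown
# above. This is an assumption; the actual codebook may differ.
ALLOWED = {
    "responsibility": {"developer", "company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "contractualist", "mixed", "unclear"},
    "policy": {"regulate", "liability", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "resignation", "mixed", "unclear"},
}

def parse_batch(raw: str) -> dict:
    """Parse a raw LLM batch response and index codes by comment ID.

    Raises ValueError on a missing ID, a missing dimension, or a label
    outside the (assumed) codebook.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            raise ValueError(f"row missing comment id: {row!r}")
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Hypothetical one-row batch in the same shape as the response above.
raw = '[{"id":"ytc_X","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}]'
codes = parse_batch(raw)
print(codes["ytc_X"]["emotion"])  # -> fear
```

Validating labels at parse time catches the common failure mode where the model drifts off the codebook (e.g. inventing a new emotion label), so bad rows surface immediately instead of silently entering the coded dataset.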