Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- `ytc_UgxMO88CN…`: "I give us 5 years at best before control is lost and AI starts to destroy us . B…"
- `ytc_UghK2Dw34…`: "I wouldn't kill an ai if it has feelings ESPECIALY IF IT MAKES MY GOD DAMN TOAST…"
- `ytc_Ugw6HXDFf…`: "lol, prompting? Ai will take that job, and most of the others you mentioned. And…"
- `ytc_UgzKjC68T…`: "AI can't really replace a lot of the things it's currently purported to do . . .…"
- `ytc_Ugz6tE_YF…`: "Delusional. He has stated explicitly that he believes a certain text Google AI …"
- `ytc_UgxRr_3CN…`: "If I were to buy a self driving vehicle I would buy one with Lidar and would nev…"
- `ytr_UgwVj0yBa…`: "@Cosmicllama64 like that one AI deleting a company's entire database then saying…"
- `ytr_UgzM9asui…`: "True, but using legitimate AI writing tools like Humanlike Writer for content cr…"
Comment (youtube · AI Moral Status · 2019-07-04T11:1…):

> I love A.I. but this is now looking like we might have real "terminators" in future. Its ability to learn, its self-awareness and also its ambition to reprogram itself will soon make it uncontrollable such that even when trying to power it off it will know and it will always safeguard its power source and survival.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | ban |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[{"id":"ytc_UgzQZCR7Xq7JgTBzLp54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyPlLURSRKDHzR7sl54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwiucQpPh9XPe6WNKZ4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_Ugy5NtkQ7AumO03W6AV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugyu_BFut6_FE7zlDF14AaABAg","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugx5rsEiO8Z6WBEOdC94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzO4fyQOdbUONVPwsF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugz7ITL_zg7cFMO1wlp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgxXVrLcrJ9XI5l_1QN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgykacOIYHYH8zZD1K94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"outrage"}]
```
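A raw batch response like the one above can be turned back into per-comment codings by parsing the JSON array and indexing on the `id` field. Below is a minimal sketch, assuming the response is a well-formed JSON array of objects with the four coding dimensions shown; the function name `index_codings` and the two-entry sample string are illustrative, not part of the actual pipeline.

```python
import json

# Two entries copied from the raw response above, for illustration.
raw_response = '''[
{"id":"ytc_UgzO4fyQOdbUONVPwsF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgyPlLURSRKDHzR7sl54AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# The four coding dimensions shown in the result table.
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def index_codings(raw: str) -> dict:
    """Parse a raw batch response and index codings by comment ID,
    keeping only entries that carry all expected dimensions."""
    codings = {}
    for entry in json.loads(raw):
        if "id" in entry and all(dim in entry for dim in DIMENSIONS):
            codings[entry["id"]] = {dim: entry[dim] for dim in DIMENSIONS}
    return codings

by_id = index_codings(raw_response)
print(by_id["ytc_UgzO4fyQOdbUONVPwsF4AaABAg"]["policy"])  # ban
```

Validating that every expected dimension is present before indexing guards against truncated or malformed model output, which is a common failure mode for batch JSON responses.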