Raw LLM Responses
Inspect the exact model output behind any coded comment.
Look up by comment ID
Random samples — click to inspect
But is it so much of a problem? Isnt that just already happening? But instead of…
ytc_UgwTBAiuC…
I've been saying this for the past 10-15 years.
We're trying to run the AI age o…
ytc_UgwHg9XHx…
In my mind after this video AI is kinda like nukes in the 50s. Humanity though n…
ytc_UgxBdApmy…
it just seems like its an slightly enhanced version of FSD? Which the new FSD i…
ytc_UgxrPZ91p…
If you don't know how AI works, you're not an engineer. A lion tamer, maybe, but…
ytc_Ugwg9ApgY…
Our current method of schooling is absolutely soul sucking. I’m all for trying s…
ytc_Ugxotz9dY…
beautiful algorithm, it serves you Italian food when you are at an Italian resta…
ytc_UgztJwI52…
The guy asked where are all these people going to go to work? That's the problem…
ytc_UgyKkn_4l…
Comment
Gotta love MSM fearmongering. We havent created AI yet, in fact we are nowhere near close to creating AI yet. Just because these companies keep branding everything AI - doesnt mean it exists. These are just Machine algorithms, its just computers being told - do X+Y=Z... this is why chatGPT lies constantly, because it gets confused. Honestly, its not Skynet yet. We may never be able to create anything close to a true AI.
youtube
AI Moral Status
2025-06-04T13:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugw3mHz-o22vHJ3eFSF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugzz82j6SD15gw6_LPZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgwYLgrlgFb0QRIRzyF4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgxAXXUcLlpUdGe5Ufl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgwV0GVNF39uEM3qNA14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwqmjDykR-FnS5IUZp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgybiHe42yQGuz-BWlN4AaABAg","responsibility":"government","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxoK-mJgH_ojpXUeAB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugwn3-9Te2NI_IA7gqt4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgylxmOPrTcZFYyO3h94AaABAg","responsibility":"developer","reasoning":"virtue","policy":"unclear","emotion":"outrage"}
]
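The raw LLM response is a plain JSON array, one object per coded comment, so looking a comment up by ID (as the inspector above does) amounts to parsing the array and building a dictionary keyed on `id`. A minimal sketch, using two rows copied from the response above (the variable names are illustrative, not part of the tool):

```python
import json

# Abbreviated raw LLM response: a JSON array of per-comment codes,
# as shown in the "Raw LLM Response" section above.
raw_response = """
[
  {"id":"ytc_Ugzz82j6SD15gw6_LPZ4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwV0GVNF39uEM3qNA14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"}
]
"""

codes = json.loads(raw_response)

# Index the rows by comment ID to support "look up by comment ID".
by_id = {row["id"]: row for row in codes}

code = by_id["ytc_Ugzz82j6SD15gw6_LPZ4AaABAg"]
print(code["responsibility"], code["emotion"])  # company outrage
```

In practice a real response may also need validation (e.g. checking that each dimension holds one of the expected codes such as `unclear`, `none`, or `outrage`) before the rows are written to the coding table.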