Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- Why AI needs humans more that humans need AI… behind all human progression there… (ytc_UgyoUX2_M…)
- One aspect it's not discussed about this ai issue is the copyrights, ip constrai… (ytc_UgyyypPNm…)
- He thinks character ai is the freakiest app wait till he knows about chai 💀… (ytc_Ugz-DTS6L…)
- Is it conscious? Unlikely. Can it lie? No. Can it be programmed with deception? … (ytc_UgwVvxVSE…)
- "AI pioneer explains why it poses an existential risk for humanity": Summary:… (ytc_UgzJVVKbA…)
- Super intelligent means you need his thinking power to make decisions so it woul… (ytc_UgxA8DqJ0…)
- @IndexRed of course it's a mistake. That's what I was explaining to you. AI d… (ytr_Ugy6wbnI0…)
- We think of Lawyers as the ones we see in movies and TV shows defending criminal… (ytr_UgyDW3I_9…)
Comment
> In 2 years I will join college but what will I even study , or what can I study , what can I even study for when AI will do all the jobs humans did .
> People say that AI will be good in the field of education but the jobs we are studying for don't exist in the future then there isn't any point in it , if I can't use the knowledge I know then what's the point in me spending hours and hours trying to attain that knowledge.
> There are a lot of safety concerns with AI and it is a well known fact yet companies and governments aren't willing to take action even though they well know it can lead to our own extinction.

youtube · AI Governance · 2025-10-07T18:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgydKu9yuwr6L3oGhAR4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgzTVeNXca2gygEX9o94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugy0625gpRxwygf8_mF4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx3l_uHaYwIO1cmcCN4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
  {"id":"ytc_Ugzda9HyzQ3bPuxVCGl4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgyBzeN9JNkzvQfgHjl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzMAFRQeipukKwnfzB4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugx-kUrs0CLM_-N4zMF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgxPDEY7-nXy9OLqoFF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugy3wZpKggohAm8h-494AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"mixed"}
]
```
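The lookup-by-ID view above can be reproduced with a short script: parse the raw model output (a JSON array of per-comment coding records) and index the records by their `id` field. A minimal sketch, not the tool's actual implementation; the two records are copied from the sample response above, and the dict-based index is an illustrative choice:

```python
import json

# Raw model output: a JSON array of coding records, one per comment.
# (These two records come from the sample response above.)
raw_response = """
[
  {"id": "ytc_UgydKu9yuwr6L3oGhAR4AaABAg", "responsibility": "company",
   "reasoning": "consequentialist", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugx3l_uHaYwIO1cmcCN4AaABAg", "responsibility": "ai_itself",
   "reasoning": "deontological", "policy": "ban", "emotion": "fear"}
]
"""

# Index records by comment ID for constant-time lookup.
by_id = {record["id"]: record for record in json.loads(raw_response)}

# Look up the coding for a specific comment.
record = by_id["ytc_Ugx3l_uHaYwIO1cmcCN4AaABAg"]
print(record["emotion"])  # fear
```

If the model's output ever deviates from strict JSON (stray text before or after the array), `json.loads` raises `json.JSONDecodeError`, which is a useful place to catch and log malformed responses.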