Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a comment by ID, or inspect one of the random samples below:

- "I like ai, but just for writing my strange stories and sometimes generate my fri…" (ytc_UgwkN_zQw…)
- "Just get AI do something cool together just do it and them keep doing that only …" (ytc_UgwqfiE6f…)
- "Judgment day will Literally be an all out war against technology lol 😂ha ha ha 🤣…" (ytc_UgyrFB645…)
- "AI has been talked about for a really long time. I forgot who metioned the dange…" (ytc_UgwZm6ib8…)
- "As an AI Engineer who works mainly on Computer Vision both using Imagery and LiD…" (ytc_Ugwedi7Fr…)
- "Are you getting laid off because AI is replacing you? Or are you getting laid of…" (ytc_Ugz9VFePQ…)
- "Great interview and the point of strictly regulating AI development by any coun…" (ytc_UgxeJFPMN…)
- "Sam! SAM! SANS! AN AI JUST HIT THE PENTAGON!!! TURN ON THE TV! NO MATTER WHAT CH…" (ytc_Ugw_-J5zX…)
Comment
If you create an artificial intelligence robot that thinks like a human but lacks true empathy and emotions then you’ve in effect just created a highly intelligent, highly knowledgeable, strong sociopath.
And like any sociopath you really don’t want to be anywhere near them.
You’d have created something very dangerous indeed.
Source: YouTube · Topic: AI Governance · Posted: 2025-07-28T16:1… · ♥ 11
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
[
{"id":"ytc_UgzUHbQnM62I5UpvjIN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugypk_9m1ym464xPR8p4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"liability","emotion":"indifference"},
{"id":"ytc_UgxCascvEqWuXP3WNtF4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgyY49Mvi90aYe8cRD14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxGBJxNqSuHGSxmocp4AaABAg","responsibility":"government","reasoning":"virtue","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVLY2AtJ6XCnYPett4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy7jsN8dwyBm71_owx4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"ban","emotion":"indifference"},
{"id":"ytc_UgyhcpSX3wkpSgN4G-Z4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzyHXtbaL52okmEjTJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgwSAaG20xpShTwvk914AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
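The raw response is a JSON array with one object per coded comment. A minimal sketch of how such output could be parsed and validated before storing the codes — the allowed values per dimension below are inferred from the codes visible on this page, not a definitive codebook, and `parse_codes` is a hypothetical helper:

```python
import json

# Allowed values per dimension — inferred from the codes shown above;
# the real codebook may include additional categories.
SCHEMA = {
    "responsibility": {"developer", "company", "government", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "liability", "ban", "none"},
    "emotion": {"fear", "outrage", "indifference", "approval"},
}

def parse_codes(raw: str) -> list[dict]:
    """Parse a raw LLM response, keeping only rows with a plausible
    comment ID and a known value for every coding dimension."""
    rows = json.loads(raw)
    valid = []
    for row in rows:
        # YouTube comment IDs in this dataset appear to carry a "ytc_" prefix.
        if not row.get("id", "").startswith("ytc_"):
            continue
        if all(row.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            valid.append(row)
    return valid
```

Rows the model codes with an out-of-vocabulary label are dropped rather than stored, so downstream tallies only ever see values from the schema.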