Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- "If they made robots i would like them to look like Robby the robot. If it looks…" (ytc_UgwrgCd_L…)
- "The concept of protecting ones right to draw profit is the mistake. Copyrights …" (ytc_UgxV5DBAh…)
- "Follow Indian ancient wisdom: Build our own family business; that the best way t…" (ytc_UgyipNs4M…)
- "@josephmartinez1267 If? I very much doubt they're are using AI to learn because…" (ytr_Ugz7zvP9_…)
- "Is par my knowledge I know to access AI network monotory to follow FGAP and FGAR…" (ytc_Ugy4Oj8yR…)
- "It seems that large language models (LLMs) are programed to be agreeable and apo…" (ytc_UgzveBFP4…)
- "Well... I was always a shity human (got into it by accident) so this AI is cover…" (rdc_jigme1y)
- "I hope it doesn't get out there. Ai in my mind takes jobs like in the movie a.i.…" (ytc_UgxjZAvJY…)
Comment

> So basically as Elon analogized, humans building unrestrained AI is like a tiger building a stronger, smarter version of itself, for the purpose of letting it hunt for him. Even a person with half a brain can take a good guess how that's gonna ultimately work out for the tiger.

Source: youtube · Topic: AI Governance · Posted: 2023-04-18T04:0… · ♥ 69
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_Ugy4Lk4Hwb_t1Ynw8eh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgxEDFr0EaBAw0M-BcR4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugw_Y-nGre8nh5ouNxt4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxzAQbVn9bOeHw5Z554AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_Ugz3PqkvW4b2D3Ox2WZ4AaABAg","responsibility":"developer","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugz1Ws-eE7fK_XJfnSt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugxc9USbP-7ab37vbet4AaABAg","responsibility":"company","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzqec7V6Elq_7Ok28J4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzSuHHXmuv4nZNzacp4AaABAg","responsibility":"none","reasoning":"virtue","policy":"unclear","emotion":"approval"},
{"id":"ytc_UgxC3MegP7vcgiFlnpx4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
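A batch response like the one above is only usable if every record carries the expected keys and a known label on each dimension. The sketch below shows one way to parse and validate such a response before loading it into the dataset. The label sets are assumptions reconstructed from the values visible in this sample; the project's actual codebook may define additional values.

```python
import json

# Label sets observed in the sample batch above; the real codebook
# may allow more values (assumption, not the project's definition).
ALLOWED = {
    "responsibility": {"developer", "company", "user", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "unclear"},
    "emotion": {"fear", "approval", "outrage", "mixed"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM response and reject malformed records.

    Raises ValueError if a record is missing a key or uses a label
    outside the allowed set for any dimension.
    """
    records = json.loads(raw)
    for rec in records:
        missing = {"id", *ALLOWED} - rec.keys()
        if missing:
            raise ValueError(f"record missing keys: {sorted(missing)}")
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {rec[dim]!r}")
    return records
```

Rejecting the whole batch on the first bad record keeps the stored codes clean; a gentler variant could instead collect invalid records for re-prompting.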