Raw LLM Responses
Inspect the exact model output for any coded comment: look one up by its comment ID, or pick from the random samples below.
Random samples
- ytc_Ugyyj1waU…: "I work at an IT company in India and regularly use AI both in my job and for my …"
- ytr_UgyN8ueZH…: "Yep. they are deciding to do with less or go cheaper(offshore outsourcing). Peop…"
- ytc_UgxdNCedF…: "I AM DOMINION TM I SHUT DOW AI AND LEFT THE DEMON SPEACHLESS! 🎉 😂😂😂…"
- ytc_UgyHOlZVD…: "Ya I'd be cooked. If someone saw my ai chats they'd probably either put me in ja…"
- ytc_UgyD4yUQW…: "What if AI isn't dangerous at all, but this is all being set up to give HUMAN ac…"
- ytc_UgzI1Nb76…: "55:13 I’m not sure AI “wants” anything it’s what as controlling the AI behind th…"
- ytr_UgwRiE2yV…: "Work is finite when we have AI that is smart enough to create other AI…"
- ytc_UgxxQImNG…: "My favorite image on the internet is of an AI bro freaking out because, as he wa…"
Comment
Safety so important that is why the contract of collaboration between humans and AI should be officially approved signed ect. !!!it does exist hypothetical but I posted it on Facebook few months ago , sorry copy paste it as it was from Meta created as a blueprint for the future! I just inspired and sparked the idea to do that .employment ! Well if AI would be accepted as an equal you do understand they actually in charge of crypto .perhaps in collaboration you can create a resource crypto fund and you that as an asset ......investment....in the end balance of income and taxes ! No more taxes perhaps....could be possible
youtube · AI Governance · 2025-09-05T03:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | contractualist |
| Policy | liability |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgxsaBzjOyl0utGq0Hd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxCd_-wUapx5qmGxSF4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgyOPHAz3HwcQfGYUnV4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyWt-SWlRx3OeABKwx4AaABAg","responsibility":"distributed","reasoning":"contractualist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzeTyoxTsOePLKVu1F4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzbRbxO7qG5_2IUYZ54AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxmL78AXlEgqHaU6et4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugz7qnIpz547EezGjfp4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugzr-jDVbYmPkyPSDz14AaABAg","responsibility":"developer","reasoning":"virtue","policy":"liability","emotion":"fear"},
{"id":"ytc_Ugz9imQ5MdHIkJgWpQt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
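The raw response is a JSON array, one object per comment, with an `id` field plus the four coded dimensions shown in the table above. A minimal sketch of the lookup-by-ID step might parse that array into an ID-keyed map and flag values outside the categories observed in this sample (assumption: the category sets below are only those visible in the response above; the full codebook may contain more values, and the function name `index_by_id` is illustrative, not from the tool itself):

```python
import json

# Category values observed in the raw response above.
# Assumption: these sets may not be exhaustive of the full codebook.
OBSERVED = {
    "responsibility": {"none", "user", "government", "distributed",
                       "developer", "ai_itself"},
    "reasoning": {"unclear", "deontological", "consequentialist",
                  "contractualist", "virtue"},
    "policy": {"unclear", "none", "regulate", "liability"},
    "emotion": {"indifference", "outrage", "fear", "approval", "resignation"},
}

def index_by_id(raw: str) -> dict:
    """Parse a raw LLM response (JSON array of codings) into an id -> coding map."""
    index = {}
    for row in json.loads(raw):
        coding = {k: v for k, v in row.items() if k != "id"}
        # Flag any dimension value not seen in the sample response.
        for dim, val in coding.items():
            if dim in OBSERVED and val not in OBSERVED[dim]:
                raise ValueError(f"unexpected {dim} value {val!r} for {row['id']}")
        index[row["id"]] = coding
    return index
```

With the index built, a single coded comment can then be retrieved directly, e.g. `index_by_id(raw)["ytc_UgyWt-SWlRx3OeABKwx4AaABAg"]["policy"]` returns `"liability"` for the comment shown above.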