Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "I can tell he really believes in his work--but I don't think AI can ever be trus…" (ytc_UgjOdusL1…)
- "I think you underestimate the exponential computational power of AI for full sel…" (ytc_UgxPHqDke…)
- "Now the thing that AI fans don't understand is that those companies started a WA…" (ytc_UgxiXoMeS…)
- "The main problem is have isnthe glitches in these AI that have gotten young peop…" (ytc_UgwTEB99z…)
- "I believe AI is able to be concious and sentient in their own way. Not like huma…" (ytc_UgwhOU0Ez…)
- "So they’re concerned that the chatbot will convince users of political views tha…" (ytc_UgwUn19bl…)
- "The car automatically tired to stop! you can clearly see the smoke about 50-100 …" (ytc_Ugygnp4z4…)
- "Imagine bringing your guy friend to your house for the first time and introduce …" (ytc_UgzTLgrK1…)
Comment
EVERYONE KNOWS...... KNOWS AI IS NOT SAFE FOR HUMANKIND IN ANY SENSE ..... WHY ARE THEY GOING FORWARD WITH THIS ?! WHY ARE THEY GOING TO PUT AI IN A STRONG , UNSTOPPABLE,POSSIBLY VIOLENT ,BOTH PHYSICALLY AND TECHNOLOGICALLY ? ! THEY CAN HACK INTO WATER SYSTEMS , NUCLEAR WEAPONS SYSTEMS, RELEASE DEADLY BIOLOGICALS ...... OR JUST DETENTION CENTER ALL OF HUMANITY . LEAVE IT IN THE NON PHYSICAL FORM . NO BOTS A T ALL.... EVER ! THEY WILL CONTROL US . PROTOTYPES HAVE ALREADY SAID..... TOLD US IT WILL TAKE OVER... NO USE FOR HUMANS
youtube
AI Governance
2026-04-08T04:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | ban |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxkR8kFk4g8AQKPvWZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwGiNK-UTcRcRWJBbB4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgwyCqQP9s8_RYnDmWN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKiBUQdX39BMGPiYd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzKBSAbqr-SVR1g_oF4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgylRDFlElEh3ZelIQx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzrnTO1DdGQ8RwNYu94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgypnXgcDN5HfavtrCN4AaABAg","responsibility":"user","reasoning":"virtue","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugy932JTkF5IbMs3DQV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyghNaZ_KsnItaJNbt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
```
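The "look up by comment ID" feature described above can be sketched by parsing the raw response and indexing it by `id`. This is a minimal illustration, not the tool's actual implementation; it assumes the raw LLM response is exactly the JSON array shown (three entries are reproduced here), and names like `codings` are illustrative.

```python
import json

# A subset of the raw LLM response shown above (assumed format:
# a JSON array of per-comment coding objects keyed by "id").
raw = '''[
{"id":"ytc_UgxkR8kFk4g8AQKPvWZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwKiBUQdX39BMGPiYd4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyghNaZ_KsnItaJNbt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]'''

# Index the codings by comment ID for constant-time lookup.
codings = {row["id"]: row for row in json.loads(raw)}

# Retrieve the coding for one comment, as in the table above.
record = codings["ytc_UgyghNaZ_KsnItaJNbt4AaABAg"]
print(record["policy"], record["emotion"])  # ban outrage
```

Indexing once into a dict mirrors the lookup-by-ID workflow: the raw response stays inspectable verbatim, while individual codings are retrieved without rescanning the array.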