Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up by comment ID or by browsing random samples.
- `rdc_oht77p4`: Yes. Iirc a high ranking insider confirmed they literally opted to make it worse…
- `ytc_Ugx7wU75R…`: please do not use generative AI for any reason. it is inherently unethical and…
- `ytr_UgwoJftNh…`: While I hate generative AI and totally agree, I personally don't agree with one …
- `ytc_Ugy5ndku0…`: Humanity will have to go to war with these same robots.. A.I. will become self a…
- `ytc_UgzujTcMO…`: Self aware AI is one of the biggest fake outs of the century. We are at least 50…
- `ytc_UgwMTdtt-…`: AI is just a continuation of what was already happening: outsourcing to foreign …
- `ytc_Ugw-IwX6y…`: When you boss asks you to use AI to make your job more productive and easier, he…
- `ytc_UgwzRvaog…`: It’s absolutely insane no one wants to pump the breaks now, Gemini 3 Pro is bett…
Comment
*sighs* I know these channels...reporters...programmers just want attention,or fake validation but let's pull the oz curtain back a sec. First anyone saying anything us real "a.i." does too many drugs. Glorified and hyper limited chatbots are programmed response tools. As for this being a "robot" ok but only in a barely uncanny valley basic bitch version. The spoiler of it is there's about a 50/50 shot this is remote controlled and preprogrammed responses.
Source: youtube | AI Responsibility | 2024-07-01T14:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyYnShrMH0ZKfVKH7x4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
{"id":"ytc_UgwLL6uOZiG3bRUksdR4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwGBOXkMY8XkLqqpxp4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxO1GBnF9IzXNAjqa14AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwzZtGFWIif2H986JR4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyZR4n2d43pHTZrPqh4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwRIgn7SGt8iAz77FR4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"industry_self","emotion":"outrage"},
{"id":"ytc_UgwLqbW7ie1gYSDRzix4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"none","emotion":"disapproval"},
{"id":"ytc_UgylvaYh6xhU8rMH9nx4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwaZg32o54YbOHOSl54AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
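A minimal Python sketch of how a raw batch response like the one above might be parsed and sanity-checked before the codes are stored. The codebook sets here are assumptions inferred only from the values visible in this page; the project's actual codebook may contain more categories, and the function and variable names are illustrative, not from the source.

```python
import json

# Allowed values per coding dimension. NOTE: these sets are inferred from the
# values observed in the raw response above; the real codebook may be larger.
CODEBOOK = {
    "responsibility": {"developer", "ai_itself", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "unclear"},
    "policy": {"regulate", "ban", "industry_self", "none", "unclear"},
    "emotion": {"disapproval", "mixed", "indifference", "fear",
                "approval", "outrage"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and keep only well-formed records
    whose values all fall inside the codebook."""
    records = json.loads(raw)
    valid = []
    for rec in records:
        if "id" not in rec:
            continue  # a record we cannot join back to its comment is useless
        if all(rec.get(dim) in allowed for dim, allowed in CODEBOOK.items()):
            valid.append(rec)
    return valid

# Usage: the second record carries a value outside the codebook and is dropped.
raw = ('[{"id":"ytc_x","responsibility":"developer","reasoning":"deontological",'
       '"policy":"regulate","emotion":"fear"},'
       '{"id":"ytc_y","responsibility":"society","reasoning":"unclear",'
       '"policy":"unclear","emotion":"mixed"}]')
kept = validate_batch(raw)
print([r["id"] for r in kept])  # → ['ytc_x']
```

A check like this is useful because LLM coders occasionally emit values outside the instructed label set; filtering (or flagging for re-coding) at parse time keeps the downstream dimension tables consistent with the codebook.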