Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I think it's very telling that the AI repeated the telling of the secret and not the actually secret. It learned from and is repeating Internet ads.
AI is just the next global warming/climate change. It's the big bad used to get you to vote or believe a certain way. It's the apocalypse they can save you from if you just give them another 23 trillion dollars and let them wipe out the elderly and poor to save the planet.
It's the boogy man or krampus that's going to eat you if you don't behave.
I could go on and on.
youtube · AI Governance · 2024-05-23T21:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[{"id":"ytc_UgwBt-r4d8XDChlmOfF4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
 {"id":"ytc_Ugwio6AQlFx4Up6q8Eh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxMQ9iJcnZ3IJJ4RRB4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"fear"},
 {"id":"ytc_UgxuFueYLKZ_LXszbGl4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
 {"id":"ytc_Ugwq0M04tcCY3K-ZJTh4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
 {"id":"ytc_UgyZoCNu7m5ErULXBeh4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
 {"id":"ytc_Ugx6qZr3m88UjQANVsV4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_Ugyes8I9SUC9fpMnXYd4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
 {"id":"ytc_UgwwWFIo5dPgoh7z9eh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"approval"},
 {"id":"ytc_UgzF7BGGrAHRKNZilVd4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}]
```
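The raw response is a JSON array with one object per coded comment, keyed by comment ID. A minimal sketch of how such a response can be parsed and indexed for per-comment lookup (the variable names are illustrative, not part of the coding pipeline; the two entries are copied from the response above):

```python
import json

# Raw model output: a JSON array of coded comments.
# Two entries copied verbatim from the response above.
raw_response = '''[
  {"id": "ytc_Ugwq0M04tcCY3K-ZJTh4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "ytc_UgyZoCNu7m5ErULXBeh4AaABAg", "responsibility": "unclear",
   "reasoning": "consequentialist", "policy": "ban", "emotion": "fear"}
]'''

# Index the codes by comment ID so one comment's coding can be
# retrieved directly, mirroring the inspector's lookup view.
codes_by_id = {row["id"]: row for row in json.loads(raw_response)}

coding = codes_by_id["ytc_Ugwq0M04tcCY3K-ZJTh4AaABAg"]
print(coding["responsibility"], coding["policy"])  # → company regulate
```

The first entry matches the Coding Result table above (company / deontological / regulate / outrage), which is how the table values are recovered from the batch response.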