Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:
- "EVERYTHING TO FIGHT FOR!! NATURE!! REPLACING MURDEROUS 'LEADERS' WITH HUMANITY P…" (`ytr_UgywXfkvB…`)
- "The ai doesn’t take inspiration. Have you ever heard of a rip-off? Because the d…" (`ytr_UgxGHk2Vm…`)
- "you don't need AI for this... Idk about your country but each election here ther…" (`ytr_Ugy2v8W63…`)
- "After the Turnitins new update, most of these humanizers become useless because …" (`ytc_UgyaMrZWH…`)
- "If its a car that is driverless, then it would be whoever installed the system o…" (`ytc_UgjQ6Qxet…`)
- "I have a conspiracy theory that every AI bro is really just a bot trying to conv…" (`ytc_UgwnYjTkk…`)
- "I’m okay with my AI only acting like an encyclopedia. But that does sound fun 😂…" (`ytc_UgxZBU0mH…`)
- "Compute: Future currency ? See how the AI buildings Grow. 'Compute Is the futu…" (`ytc_UgxwX_wtb…`)
Comment

> He got shot because some people thought he was a a police informant because the police visited his house so much.
>
> When it comes to anything more complicated than simple axioms, there is no "purely objective" information, the context in which it is presented is what gives it meaning. Machines simply cannot reliably spit out fully formed conclusions, because they do not think and do not understand language. You can make people believe anything with statistics if you use unclear metrics (the ai is a black box, we do not know how it works) and flawed data.

Source: youtube | AI Bias | 2022-12-18T18:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | deontological |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyfZvoIV14abrPBp9N4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugwj0lqoYueXAXGgChN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzWbS5WYyNY6h1NA4h4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwDGhC2y3EwF5waNZp4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzV0eREhCsE5qdNBIt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyUGCGvrQCk-FipBJF4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"},
  {"id":"ytc_Ugw11UmBHSeoDu2y7Qd4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxIkeoYXyG0tRH_Yr54AaABAg","responsibility":"developer","reasoning":"unclear","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugy_1g9NyJee_E2C-UV4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxKlZZkPmt9nBwT--d4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
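The raw response above is a JSON array mapping each comment ID to the four coded dimensions. A minimal sketch of how such a batch response could be parsed into per-comment records, with a sanity check against the dimension values seen in this sample (the full codebook presumably allows more values; the vocabulary below is an assumption drawn only from this snippet):

```python
import json

# Dimension values observed in the sample response above.
# ASSUMPTION: the real codebook may define additional values.
OBSERVED = {
    "responsibility": {"ai_itself", "company", "none", "distributed", "developer"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "liability"},
    "emotion": {"approval", "outrage", "indifference", "mixed", "resignation", "fear"},
}

def parse_response(raw: str) -> dict:
    """Parse a raw LLM batch response into {comment_id: coding},
    flagging any dimension value outside the observed vocabulary."""
    records = json.loads(raw)
    coded = {}
    for rec in records:
        cid = rec["id"]
        for dim, allowed in OBSERVED.items():
            if rec.get(dim) not in allowed:
                raise ValueError(f"{cid}: unexpected {dim}={rec.get(dim)!r}")
        # Keep only the coding dimensions, keyed by comment ID.
        coded[cid] = {dim: rec[dim] for dim in OBSERVED}
    return coded
```

For example, feeding it a one-element response like `'[{"id":"ytc_x","responsibility":"distributed","reasoning":"deontological","policy":"none","emotion":"resignation"}]'` yields a dict whose `"ytc_x"` entry carries the four dimensions shown in the Coding Result table above.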