Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking up a comment ID or by browsing the random samples below.

Random samples
- "5:45 Like, has this guy ever had the sensation that capitalism worked toward peo…" (ytc_Ugwhs4khk…)
- "Who I buy all this AI, when all the jobs have gone, who as the money to buy AI,…" (ytc_UgyX9Wk0F…)
- "Because you have the ability to engage in abstract thinking. Something a machine…" (ytr_UgyztM8op…)
- "the funny thing is, once i wrote to AI that it is only allowed to answer me when…" (rdc_oi3utkm)
- "Teaching something right and wrong is not the same as having an understanding ab…" (ytc_Ugzv2qjm3…)
- "Finally someone that doesnt just think of every single horror film ever made whe…" (ytc_UgxsgFmJz…)
- "If AI's goal is self preservation why would it kill off all humans? without huma…" (ytc_UgyLBuYde…)
- "just turn your router off... simple that's how easy it is.. AI can only exist on…" (ytc_UgxKPrCyn…)
Comment

> As someone who gets paid to work with these LLMs, my concern is their accuracy. They FREQUENTLY present incorrect information as truth, even when provided with a short documentation. There has been almost no improvement for the 28 months ive been doing this.
> The issue is, LLMs do not *know* anything. There is no single fact that they can repeat when asked 10,000 times with 100% accuracy.

youtube · Cross-Cultural · 2024-03-21T06:2… · ♥ 17
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgwG6l3vYhQxPiWerEJ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzQiq6t_wHUPM4_RbJ4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz-t0V8m7QK6TzbfOd4AaABAg","responsibility":"industry_self","reasoning":"consequentialist","policy":"liability","emotion":"approval"},
{"id":"ytc_UgzNmxMu7suKlCIvwgF4AaABAg","responsibility":"none","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxFCr47DVClpbqez8Z4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgzZDyA-LxisO8Fkntl4AaABAg","responsibility":"unclear","reasoning":"deontological","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwDehY6nGeqjglEYCl4AaABAg","responsibility":"company","reasoning":"unclear","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzHDpRU8jqt2L_NJSV4AaABAg","responsibility":"government","reasoning":"deontological","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy7dC2bSlvqpJe3qct4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"resignation"},
{"id":"ytc_Ugzaq47x3OC-4SFsZz54AaABAg","responsibility":"government","reasoning":"contractualist","policy":"regulate","emotion":"approval"}
]
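The raw response above is a JSON array in which each object carries the comment ID plus one value per coding dimension (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for the lookup-by-ID view — the `raw_response` string and function name here are illustrative, not the tool's actual code:

```python
import json

# Hypothetical raw response string; in practice this would be read from the
# coding tool's storage. The shape mirrors the sample array above.
raw_response = '''
[
  {"id": "ytc_Ugy7dC2bSlvqpJe3qct4AaABAg",
   "responsibility": "developer", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "resignation"}
]
'''

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response and index the coded records by comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codings = index_codings(raw_response)
print(codings["ytc_Ugy7dC2bSlvqpJe3qct4AaABAg"]["emotion"])  # resignation
```

Indexing by ID makes the per-comment inspection view a single dictionary lookup rather than a scan over every batch response.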