Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Wyt folk and I’m gon say wyt folk cause a ninja would never…. They so thirst to …" (ytc_UgyMLhy1T…)
- "@neverneverland5836 that's true for how ai is currently made, but there are peop…" (ytr_UgwQjt4Ub…)
- "Hey there! It seems like you're drawing a parallel with the themes of Isaac Asim…" (ytr_Ugy3mcERe…)
- "I have never and will never use Gen Ai in my art (Nor do I use ChatGPT or anythi…" (ytc_UgzBnZag5…)
- "Or we could just not give robots the ability to program themselves and just stic…" (ytc_UgzcGhOAR…)
- "Im thinking so myself" (on "Ongoing File Delivery Failures — PNGs and ZIPs Broken, Ev…") (rdc_n7k96hv)
- "AI is pure trainable intelligence / You cannot blame the AI for being right / Facts …" (ytc_UgxBDVskm…)
- "@witerunguard1737 who cares if artists are “whiny”? our jobs are being taken by …" (ytr_UgzbIEKTQ…)
Comment
A lot of people like to speculate about what bad things future AI systems might do, but most of it is quite ad hoc reasoning. There is actually a whole field of AI safety research which studies this rigorously, the channel https://www.youtube.com/@RobertMilesAI is a good introduction to this. What you can learn from that channel is:
- How AI systems develop goals that weren't explicitly in the training setup
- How AI systems benefit from being deceptive
- How hard alignment with human goals actually is
- How AI systems do not have to be vastly more intelligent than humans along many axes to be dangerous
- How the actions of advanced AGI (artificial general intelligence) will be hard to understand and impossible to predict
youtube · AI Governance · 2025-08-26T17:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | mixed |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgzHNH3jgnuUeiqntzt4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugyj7cE_vmx_PkA1LDp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz_BaFM712eIP6j1qR4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugz4MGHq1c8H81JXzkF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_Ugykvmnr8uk-u8yFMqV4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"}
]
```
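A raw response in this shape can be parsed into per-comment coding records with a short script. The sketch below is illustrative, not part of the tool: the field names (`id`, `responsibility`, `reasoning`, `policy`, `emotion`) come from the JSON above, while the function name and the truncation to two records are hypothetical.

```python
import json

# Raw model output in the format shown above (truncated to two records for brevity)
raw = '''
[
  {"id": "ytc_UgzHNH3jgnuUeiqntzt4AaABAg", "responsibility": "ai_itself",
   "reasoning": "mixed", "policy": "none", "emotion": "fear"},
  {"id": "ytc_Ugykvmnr8uk-u8yFMqV4AaABAg", "responsibility": "none",
   "reasoning": "mixed", "policy": "none", "emotion": "approval"}
]
'''

# The four coding dimensions displayed in the "Coding Result" table
DIMENSIONS = ("responsibility", "reasoning", "policy", "emotion")

def parse_codings(payload: str) -> dict[str, dict[str, str]]:
    """Index coded records by comment ID, keeping only the coding dimensions."""
    records = json.loads(payload)
    return {rec["id"]: {dim: rec[dim] for dim in DIMENSIONS} for rec in records}

codings = parse_codings(raw)
print(codings["ytc_Ugykvmnr8uk-u8yFMqV4AaABAg"]["emotion"])  # approval
```

Indexing by `id` is what makes the "Look up by comment ID" view cheap: each lookup is a single dictionary access rather than a scan over all coded records.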