Raw LLM Responses
Inspect the exact model output for any coded comment, looked up by comment ID. Random samples:
- Unfortunately those who appreciate art rather than just consuming it are a small… (ytc_Ugxhl6giI…)
- "Happy" doesn't even _begin_ to cover how I feel after hearing this news. I just… (ytc_UgxXv5OWQ…)
- Me using AI and then asking other people if said information is to be trusted --… (ytc_Ugz3vB4Kv…)
- If the people are faced with a paltry $1,000/month in UBI versus $0/month in une… (ytc_UgxbxmdIw…)
- Unironically go ask ChatGPT. There's plenty of debate on this by insiders and it… (ytr_Ugwa0_-Wu…)
- During the comments about AI and human touch I rethought a thought I’ve had for … (ytc_UgzZfAHqk…)
- We need to learn the new ways to govern our society in the human race into the f… (ytc_Ugw7EfKhD…)
- @peacefusion It is regularly stated by AI art developers that the goal is to rep… (ytr_UgxtZQliG…)
Comment

> Some sci-fi and movies covers some aspects of what AI can create of issues for homo sapiens and/or the galaxy, during many years. Use AI to make a list. XD. During all times there are homo sapiens that make desertions that creates horrible consequences for others, often using whatever technology and knowledge to force whatever mindset. A high chance that it will be 2 (or more) AI's that fight each other, at least one controlled and one "sentient" (Skynet). EDIT: Making AI "safe" is as claiming that can make all humans "safe" (no conflicts and exploitations, simplify to no one shorten others lives). An AI need to be given a purpose, along what are "exit loop" conditions, also change requirements ought to be considered.

Source: youtube · Topic: AI Governance · Posted: 2025-06-16T08:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:53.388235 |
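
Each coded dimension appears to draw from a small closed vocabulary. The sketch below validates one coded row in Python; the value sets are only those observed in this batch (the full codebook may define more), and `ALLOWED` and `validate_row` are hypothetical names, not part of the tool:

```python
# Hypothetical validator for one coded row. Value sets are only those
# observed in this sample batch, not necessarily the full codebook.
ALLOWED = {
    "responsibility": {"user", "company", "government", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "contractualist", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"fear", "mixed", "indifference", "approval", "resignation"},
}

def validate_row(row: dict) -> list[str]:
    """Return a list of problems with a coded row; empty means it passed."""
    problems = []
    # IDs in this dataset start with ytc_ (comments) or ytr_ (replies).
    if not str(row.get("id", "")).startswith(("ytc_", "ytr_")):
        problems.append(f"unexpected id format: {row.get('id')!r}")
    for dim, allowed in ALLOWED.items():
        if row.get(dim) not in allowed:
            problems.append(f"{dim}={row.get(dim)!r} not in {sorted(allowed)}")
    return problems
```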
Raw LLM Response
```json
[
{"id":"ytc_UgyN21og4E1Vp25P4Xd4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgycNhsUV7-IrRsLxI94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgyRz6ya75jreLcsNeZ4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_Ugw6vJahek872ciF8tZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzMWVw8-G1qgyU1nIB4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgwLUNT5bvPHt7KxCYR4AaABAg","responsibility":"government","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwllG9Jc0rmyH97EFJ4AaABAg","responsibility":"company","reasoning":"contractualist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxE6hY69raIcmiIf9x4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugz74FuoXxBSMON24Bp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgyH6iLOZRXjO4JY-TN4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
```
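
The Coding Result table above corresponds to one row of this batch response: the entry whose values match the table (id `ytc_Ugz74FuoXxBSMON24Bp4AaABAg`). A minimal lookup sketch in Python, assuming the raw response is stored verbatim as the JSON array shown; `find_coding` is a hypothetical helper, not the tool's actual API:

```python
import json

def find_coding(raw_response: str, comment_id: str) -> dict | None:
    """Parse a raw batch response and return the row for one comment ID."""
    rows = json.loads(raw_response)  # the model returns a JSON array of rows
    return next((row for row in rows if row.get("id") == comment_id), None)

# Example: pull the row behind the Coding Result table above.
# coding = find_coding(raw, "ytc_Ugz74FuoXxBSMON24Bp4AaABAg")
# -> {"id": "ytc_Ugz74FuoXxBSMON24Bp4AaABAg", "responsibility": "user",
#     "reasoning": "consequentialist", "policy": "unclear",
#     "emotion": "resignation"}
```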