Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- ytc_UgxSG3bIG…: "Robert Miles AI Safety videos are a must watch if you're interested in this topi…"
- ytc_UgyX93TpS…: "Hard line Anti Here.... How do you find the groups? That are trying to Do somet…"
- ytc_UgwaOz6Dq…: "The guy just said for a $20 subscription you can get an AI Bot to do the job. My…"
- ytc_UgyNjE1H9…: "This Sophi robot scam is getting out of control... Now main stream media is repo…"
- ytc_Ugz6-M8Z5…: "Without humans, AI will have no source of information. This is the topic that th…"
- ytc_UgwRGSoIQ…: "From Evolutionary perspective AI is constantly becoming 100 times more intellige…"
- ytc_Ugy-YPCOC…: "I feel like I'm watching a conspiracy reel. .... if it's doing other more compli…"
- ytc_UgwhaSrI1…: "Part of me thinks the Covid-19 lockdowns were a test run for AI. These AI compan…"
Comment
It's the end goal we set for ourselves that if accomplished proves the rules correct.
Some AI are already designed to ask themselves good questions, try to solve them and if they're unable to do it, ask even better questions.
They do it in a loop at billions of times the speed of a human brain, even if it's not conscious. Surely it's some form of intelligence.
And given enough time, maybe with quantum computing, it'll be indistinguishable from an intelligence. It'll walk among us and we couldn't tell the difference between a human and a robot.
Because it has asked itself so many questions and found the right answers that it'll effectively be a godlike mind in a machine.
| Field | Value |
|---|---|
| Source | youtube |
| Video | AI Moral Status |
| Posted | 2025-05-16T17:5… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_Ugzthwna2IS3FD6X-J54AaABAg","responsibility":"developer","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytc_Ugw5GTAGzRVPZ0H5aN54AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
{"id":"ytc_UgxrptvKqDVlLwN7NWR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgzY5ABc24FjIFoTM3x4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzBad-NGvWgds83o7Z4AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgxFE72xe26qnFK_jx14AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzdn5ItP0YgfcN8hXl4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytc_UgwVXS69ZTyif65Q2Xt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugwm0WtLGRfFC7Gfv2h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxqlW7uU8ExXeYtV5d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]
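A batch response like the one above can be indexed by comment ID to support the "Look up by comment ID" view. The following is a minimal sketch, assuming the raw response is a JSON array with the field names shown above; the two entries are copied from the response, and any surrounding pipeline code is hypothetical.

```python
import json

# Raw batch response: a JSON array of per-comment codings,
# using two entries copied from the response above.
raw_response = """
[
  {"id": "ytc_Ugzthwna2IS3FD6X-J54AaABAg", "responsibility": "developer",
   "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_Ugw5GTAGzRVPZ0H5aN54AaABAg", "responsibility": "user",
   "reasoning": "deontological", "policy": "none", "emotion": "fear"}
]
"""

# Build a lookup table keyed by comment ID so any coded comment
# can be inspected directly.
codings = {row["id"]: row for row in json.loads(raw_response)}

coding = codings["ytc_Ugzthwna2IS3FD6X-J54AaABAg"]
print(coding["emotion"])  # resignation
```

Keying the parsed rows by `id` makes the lookup O(1) per comment, which matters when the same batch is queried repeatedly from an inspection view.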