Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
You go multi planetary with abundant resources use AI in a safe way like you said to get to that spot and try to protect our intelligence meaning us having control over the intelligence. Humans and robots working side by side some cyborgs on multiple planets every body job is to learn from AGI how it got that intelligent use its advances to start to evolve the human race. That’s why we are on this planet anyway learn evolve and reproduce can’t leave that one out reproduce humans not just robots. Star Trek begins my dude!!
youtube · AI Governance · 2026-01-27T01:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxaAceo6wV1uJ3_HHh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyYjrEovQAXJmo5QU94AaABAg","responsibility":"none","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgyZLkhGT8hIu4UmgZ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwnAyM9tRViJVefsJl4AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgzZDTlE7ZSfv9iEvJR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"resignation"},
  {"id":"ytc_UgxPh7kmEnh4qmzS5794AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgwfBxjx0sW5kbskYZ14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugx7Ztqed2awfA3YZFx4AaABAg","responsibility":"developer","reasoning":"contractualist","policy":"liability","emotion":"mixed"},
  {"id":"ytc_UgyK_N545LLmr1oVLmp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgwlZRZQFs3aR9MJpXN4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"regulate","emotion":"outrage"}
]
```
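Looking up a comment's four coded dimensions from a batch response like the one above can be sketched as follows. This is a minimal illustration assuming only the JSON shape shown (a list of objects keyed by `id`); the `lookup` helper is hypothetical, not part of the tool.

```python
import json

# A small excerpt of the batch response shown above: one coding object
# per comment, keyed by the YouTube comment ID.
raw = '''[
  {"id": "ytc_UgxaAceo6wV1uJ3_HHh4AaABAg", "responsibility": "none",
   "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "ytc_UgwnAyM9tRViJVefsJl4AaABAg", "responsibility": "company",
   "reasoning": "deontological", "policy": "ban", "emotion": "outrage"}
]'''

# Index the batch by comment ID for O(1) lookups.
codings = {row["id"]: row for row in json.loads(raw)}

def lookup(comment_id):
    """Return the four coded dimensions for a comment, or None if absent."""
    row = codings.get(comment_id)
    if row is None:
        return None
    return {k: row[k] for k in ("responsibility", "reasoning", "policy", "emotion")}

print(lookup("ytc_UgwnAyM9tRViJVefsJl4AaABAg"))
# → {'responsibility': 'company', 'reasoning': 'deontological', 'policy': 'ban', 'emotion': 'outrage'}
```

Indexing by `id` first means repeated "look up by comment ID" queries avoid rescanning the list each time.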