## Raw LLM Responses
Inspect the exact model output for any coded comment. A comment can be looked up directly by its comment ID, or picked from the random samples below; a scripted lookup sketch follows the list.
- `ytc_Ugww0N7Op…`: "As an artist and a daughter who is an artist appreciate this. I will say also as…"
- `ytc_UgyYBbo-H…`: "This is one of the worst ideas humanity has ever had. This WILL kill people. My …"
- `rdc_m5lbxe0`: "Facebook became obvious AI garbage long ago. It lost all the feeling of "connect…"
- `ytr_UgzYSDeNU…`: "Perhaps these things are not practical enough to move that far away and require …"
- `ytc_UgxRnpjdX…`: "Whatever, sounds made up. As far as I'm concerned A.I. is only a light bulb even…"
- `ytc_UgwbwNfAW…`: "The aI is just taking the thoughts from other humans so the concept here alone i…"
- `ytc_UgyrehPxv…`: "It's interesting that you equate AI to Tesla. WayMo has significantly more compu…"
- `ytc_UgxReGt5s…`: "Here is the reality of this… AI does have feeling not in the sense we do but act…"
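Outside the viewer, the same lookup can be reproduced in a few lines. Here is a minimal sketch, assuming the raw responses are archived as JSON Lines with one batch array per line, in the same shape as the array shown under Raw LLM Response below; the file name `raw_responses.jsonl` and the `build_index` helper are hypothetical:

```python
import json

def build_index(path: str) -> dict[str, dict]:
    """Map comment ID -> coded record across every stored batch."""
    index: dict[str, dict] = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # Each line is assumed to hold one batch: a JSON array of records.
            for record in json.loads(line):
                index[record["id"]] = record
    return index

index = build_index("raw_responses.jsonl")  # hypothetical file name
print(index.get("ytc_Ugxg4ttJY8Cc5JNtJhx4AaABAg"))  # None if the ID is absent
```

Building the index once up front keeps repeated lookups constant-time, which helps when spot-checking many IDs in a row.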
### Comment

> i dont think it will be a matter of AI just being smarter than a human. it will be a matter of the AI being more resourceful and significantly faster at processing than a human. an AI can look up, corroborate, understand, and execute a plan based on said information, all significantly before a human even thinks to reach for their phone with the intent of googling something.
>
> Edit: and you dont even need a superai to do that. narrow ai can already do that. i think superai will really come into greater more widely applicable presesnce when it learns how to predict accurately future events, years, decades, centuries in advance. if a superai said that a meteor that will wipe out humans, and by extention AI, will hit the earth in 237 years (calculated to the millisecond) it can prepare and execute a plan to prevent this from happening. or at the very least prevent itself (and possibly a few safe human companions if we're lucky) from succumbing to this fate. im sure by the time something like that happens, off-world options are more realistic.

Platform: youtube · Topic: AI Governance · 2025-10-03T11:5…
### Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
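The four coding dimensions form a small closed vocabulary. Here is a sketch of the record schema as a Python `TypedDict`, using only the label values observed in this section; the full codebook may define additional labels, so treat these `Literal` sets as assumptions:

```python
from typing import Literal, TypedDict

# Labels observed in this section only; the full codebook may define more.
Responsibility = Literal["ai_itself", "government", "distributed", "none", "unclear"]
Reasoning = Literal["consequentialist", "deontological", "mixed", "unclear"]
Policy = Literal["regulate", "none", "unclear"]
Emotion = Literal["fear", "approval", "indifference", "mixed"]

class CodedComment(TypedDict):
    id: str
    responsibility: Responsibility
    reasoning: Reasoning
    policy: Policy
    emotion: Emotion
```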
### Raw LLM Response
[
{"id":"ytc_Ugxg4ttJY8Cc5JNtJhx4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzjnT6mem9MZ_u2syp4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzyBk64dnJFPw4LLZd4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwrPMrVlapQ-jXZUbt4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwtbzpaZwIwAo2I0rV4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzjELIuVGDRxV5wCfp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugy0-3Tu2QoMqG5uYVl4AaABAg","responsibility":"distributed","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_UgzdOatNW347OsCtzGp4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzeIrAIXiuE8Xiaf0V4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_Ugw4fuNqrqakZB3WtZd4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
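Because this array comes straight from the model, it is worth validating before anything downstream consumes it. A minimal sketch, assuming the response is handed over as a string; the required key set is taken from the records above, and `parse_batch` is a hypothetical helper:

```python
import json

REQUIRED_KEYS = {"id", "responsibility", "reasoning", "policy", "emotion"}

def parse_batch(raw: str) -> list[dict]:
    """Parse one raw LLM response, dropping malformed records."""
    records = json.loads(raw)  # raises json.JSONDecodeError on bad JSON
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    valid = []
    for rec in records:
        if not isinstance(rec, dict):
            continue
        missing = REQUIRED_KEYS - rec.keys()
        if missing:
            # Surface the problem but keep the rest of the batch usable.
            print(f"skipping {rec.get('id', '<no id>')}: missing {sorted(missing)}")
            continue
        valid.append(rec)
    return valid
```

A stricter version would also check each value against the label sets sketched under Coding Result.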