Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect

- "Hey @Controllerjeff-qf1rk, thanks for the funny comment! The robot's face is ind…" (ytr_Ugwr43p8g…)
- "jeez this is ancient in AI progress time... it was served by the youtube algo...…" (ytc_UgxgH4Td7…)
- "I was acting for being dumdum because my autism for sure, but you know why I'm f…" (ytr_UgzDFXqXs…)
- "@tokellwithit Most of the concepts in photography also apply to AI. If something…" (ytr_UgxGc4rG0…)
- "Ai would be a lot better if we made it illegal to monetize it. That would instan…" (ytc_UgxTr_2Nd…)
- "I love being informed and gaining a new perspective on things but I wish these t…" (ytc_UgxAASRDf…)
- "Fear mongering. Why does the government want us to live in fear so much. If it’s…" (ytc_UgwwF4vh-…)
- "A year ago when you guys were barely literate in AI, talking total nonsense, I m…" (ytc_UgzRHvpWP…)
Comment
I get it, it can be very dangerous if it has the power to do so. Spoke to so many people about the AI apocalypse but it doesn’t seem too realistic, google/microsoft had to use a decommissioned nuclear power plant to just power their AI, we don’t have the power capacity as yet to have an apocalypse. But have been told once AGI is available that’s when it gets scary, AGI is the next level of AI, when AI starts making decisions for you, Artificial General Intelligence, this is once we get over the hurdle of power sources for AI platforms, but there’s no solution to this power capacity as of yet.
My thoughts, it’s either we get unlimited power supply for them which would destroy the earth or we break something trying figure a solution and the world has no power and we are back in the old ages and AI would be non-existent then.
youtube · AI Governance · 2025-09-08T21:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgxR9QDJCic9KigGCSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgxvFcJjgAjM6Uy_ysN4AaABAg","responsibility":"expert","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxTAlcjHimWeGWl5k54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxJ4nUOeMFxNRQotLZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugx7csjVBkRkGn-P-cN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugwu9CBxz5mGsDkaVEJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_Ugwj5JmXy4VstP_RzU14AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzrxVAOBfzGgb-x6cl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugwzt3FnZGJTL_d7maF4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxtXxNKUrT4KRyAtd54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"}
]
```
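The raw response is a JSON array with one record per comment, keyed by comment ID. A minimal sketch of the "look up by comment ID" step, assuming only the field names visible in the response above (the `index_by_comment_id` helper is hypothetical, not part of the actual tool):

```python
import json

# Two records copied verbatim from the raw LLM response shown above.
RAW_RESPONSE = """
[
 {"id":"ytc_UgxR9QDJCic9KigGCSB4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
 {"id":"ytc_Ugx7csjVBkRkGn-P-cN4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"}
]
"""

def index_by_comment_id(raw: str) -> dict:
    """Parse a raw LLM response and index each coding record by its comment ID."""
    records = json.loads(raw)
    return {rec["id"]: rec for rec in records}

codes = index_by_comment_id(RAW_RESPONSE)
print(codes["ytc_Ugx7csjVBkRkGn-P-cN4AaABAg"]["policy"])  # → regulate
```

Keeping the verbatim parsed record next to the dimension table makes it easy to confirm that the displayed values (e.g. `policy: none`, `emotion: mixed`) came straight from the model output rather than from any post-processing.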