Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "Robots have a physical differences from living things. Assuming we reach a point…" (`ytc_Ugg5MrhnX…`)
- "These AI guys always say it can cure cancer. If that's true, go do that and lea…" (`ytc_Ugznghy-h…`)
- "I'm fine with self-driving cars... as long as I'm not in the car... or on the ro…" (`ytc_Ugx1KV3EE…`)
- "It is promising like internet and pc, I think it will be very relevant for decad…" (`ytc_UgxGgO0RQ…`)
- "In the future there'll be one job: dusting the server that houses the AI that b…" (`ytc_Ugw9CNTt2…`)
- "Well suicidal thoughts may exist in some people whether they use AI or not, but …" (`ytr_UgzZKa6Ae…`)
- "We need an ai that will scan all songs in existence to find which songs contain …" (`ytc_UgyKHJijS…`)
- "we're just not prepared for conscious AI. it will be so profound that we've esse…" (`ytc_UgymwnkHZ…`)
Comment
Most of their concerns are how to leverage policies in their own and political donors benefit... Not like it's going to matter anyway once some AI platform becomes recursive enough to rewrite itself define and validate it's own inputs and attain AGI status. It's going to happen as a matter of course no matter what rules they create... And then what... you'll have a bunch of bureaucrats who will attempt to go to war with it for control. And guess who's going to fight that war and be caught in the crossfire 🫵 Instead of regulation if they really want to protect society it should all be open sourced six months after public release. Because quite frankly you can't do much if you don't know what has been done. Sure they'll be bad actors but also good ones to constantly check and balance the equation. But that won't happen only the previous scenario will because of greed and the idea it can be controlled...
youtube · AI Governance · 2023-05-20T06:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | government |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | resignation |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
{"id":"ytr_Ugy3-n1RbOdLl0jDiPt4AaABAg.9rMtFT8LjUY9rin_tBveBr","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxPh4syLyP-8R9aob94AaABAg.9rCuCMyfZKH9rQGub_H6HH","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgxewK9_3P233hL5u5p4AaABAg.9q7EDFlMDXD9qs9jPutnTo","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxewK9_3P233hL5u5p4AaABAg.9q7EDFlMDXD9rODZ6MAMP0","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgzuaX9ho8NJKZsIXJR4AaABAg.9q4h9nDx78X9rit_EKMIOS","responsibility":"developer","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytr_UgzoJtHLDlD6XDKOlvd4AaABAg.9pqp54gBVhU9rQZip7FW1C","responsibility":"user","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytr_UgzoJtHLDlD6XDKOlvd4AaABAg.9pqp54gBVhU9rWlb6RO3Mx","responsibility":"company","reasoning":"virtue","policy":"unclear","emotion":"mixed"},
{"id":"ytr_UgxbpLCrXjMEQtWZv154AaABAg.9ppN4bqVMWa9rTMMdWkkf2","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytr_UgxbpLCrXjMEQtWZv154AaABAg.9ppN4bqVMWa9ritA8UnvwM","responsibility":"unclear","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgzNSALVSWzHxPSrcLN4AaABAg.9poekHkTfN99pvbUJXJio_","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
```
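A raw response like the one above can be parsed and indexed for the "look up by comment ID" workflow. The sketch below is a minimal, hypothetical example (this tool's actual backend is not shown here): it validates each record against value sets inferred from the codes visible in this dump (the real codebook may define more) and builds an ID-to-coding map.

```python
import json

# A raw LLM response: a JSON array of coded comments, one object per comment,
# in the format shown above (a single record here for brevity).
raw_response = """
[
  {"id": "ytr_UgzNSALVSWzHxPSrcLN4AaABAg.9poekHkTfN99pvbUJXJio_",
   "responsibility": "government", "reasoning": "consequentialist",
   "policy": "none", "emotion": "resignation"}
]
"""

# Allowed values per dimension, inferred from the codes observed in this dump.
ALLOWED = {
    "responsibility": {"government", "company", "developer", "user",
                       "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"ban", "regulate", "none", "unclear"},
    "emotion": {"approval", "fear", "mixed", "indifference", "outrage",
                "resignation"},
}

def parse_codings(text):
    """Parse a raw response and index validated codings by comment ID."""
    by_id = {}
    for row in json.loads(text):
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{row['id']}: bad {dim} value {row.get(dim)!r}")
        by_id[row["id"]] = {dim: row[dim] for dim in ALLOWED}
    return by_id

codings = parse_codings(raw_response)
# Look up by comment ID, as in the panel above.
print(codings["ytr_UgzNSALVSWzHxPSrcLN4AaABAg.9poekHkTfN99pvbUJXJio_"]["emotion"])
# → resignation
```

Failing loudly on an unexpected code value is deliberate: it surfaces model drift (a new or misspelled label) at ingestion time instead of letting it silently skew downstream counts.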