Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
The reality is that a key part of all HUMAN cognition is risk assessment. That type of risk assessment is not inherent to AI. AI has no preservation-of-humanity check on its decision making. AI is also making so many decisions so quickly that even if it were applying the same preservation-of-the-human-species check to its decision making, it could still, through sheer volume of decisions, make a fatal error.
By the way, I disagree with the one-in-a-billion risk analysis in this podcast. The risk doesn't compound by the YEAR. The risk compounds by the NUMBER of DECISIONS. If we exponentially grow AI each year, the risk compounds with every decision being made, at an essentially uncontrolled rate. Even if each individual decision is relatively safe, the risk comes with the VOLUME of decisions. That should scare the hell out of everyone involved.
Source: youtube · Topic: AI Governance · Posted: 2025-12-05T17:2…
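The commenter's compounding argument can be made concrete with a standard independence calculation: if each decision fails with probability p, then the chance of at least one failure across n decisions is 1 − (1 − p)^n, which is driven by the volume n, not by elapsed time. A minimal sketch (the per-decision rates below are illustrative, not from the podcast):

```python
# Illustrative only: a tiny per-decision error rate compounds with
# decision volume. P(at least one failure) = 1 - (1 - p)^n,
# assuming independent decisions.
def p_any_failure(p_per_decision: float, n_decisions: int) -> float:
    """Probability of at least one failure across n independent decisions."""
    return 1.0 - (1.0 - p_per_decision) ** n_decisions

# One-in-a-billion looks safe for a single decision...
print(p_any_failure(1e-9, 1))        # ≈ 1e-9
# ...but across a billion decisions a failure becomes more likely than not.
print(p_any_failure(1e-9, 10**9))    # ≈ 0.632
```

Note that the result depends only on the product p·n, which is why "safe per decision" says little once decision volume grows exponentially.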
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
{"id":"ytc_UgyYLqGdCiCDXwFe9XF4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
{"id":"ytc_Ugz7fLFhx-ZTY30iDhN4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgxEM-v269J50Zg8nVJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgwUs9VJolAwZ9JtCyZ4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwhwYa1mJw-YQBqaUd4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwZk7orM3w14Q2X7gh4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxgEyJfGAm80Q7GWsl4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzuVBF8JpP7Ae8bqKR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxSxefe9LeyYNEkVhZ4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDILA_4Ia8orIT_Tt4AaABAg","responsibility":"developer","reasoning":"deontological","policy":"liability","emotion":"outrage"}
]
```
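A batch response like the one above is only usable if every row parses and every dimension holds a known code. A minimal validation sketch, with the hedge that the allowed values below are inferred from this sample alone, not from the project's full codebook:

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# A real pipeline would load these from the actual codebook.
ALLOWED = {
    "responsibility": {"none", "company", "ai_itself", "distributed", "developer"},
    "reasoning": {"unclear", "consequentialist", "deontological", "mixed", "virtue"},
    "policy": {"unclear", "none", "liability", "regulate"},
    "emotion": {"approval", "indifference", "fear", "outrage", "resignation", "mixed"},
}

def validate_batch(raw: str) -> list[str]:
    """Return a list of problems; an empty list means the batch is clean."""
    problems = []
    for i, row in enumerate(json.loads(raw)):
        if "id" not in row:
            problems.append(f"row {i}: missing id")
            continue
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                problems.append(f"{row['id']}: bad {dim}={row.get(dim)!r}")
    return problems

# Hypothetical one-row batch in the same shape as the response above.
sample = ('[{"id":"ytc_x","responsibility":"ai_itself",'
          '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
print(validate_batch(sample))  # []
```

Validating against a closed code set catches the common LLM failure modes here: invented labels, missing dimensions, and dropped IDs.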