Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
Interesting to see this come up, just reread through the AI of the Precipice this past week and was thinking about how much has changed in such a short span of time.
Specifically what the short and long-term impacts of LLMs might be. Personally I'm doubtful LLMs will lead to AGI, but think it's clear they already pose a less existential but still massive threat to our structures of trust and truth.
Platform: youtube · Video: AI Governance · Posted: 2025-12-01T03:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxlGDu27YIsknBEVtd4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx6Az7cOZzJ84be6D14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugx-XKN2OsFZQBrxpOx4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugy09kKGeTqHHG_xsCZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwuXgNYciWPWAryw_J4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwBb81abvv8BtVW1AR4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxXqs2lTbUi5cqxH154AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgyR1ppDcdcDLctCvEF4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_Ugy_YgphAbNVVxzcQC94AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
  {"id":"ytc_UgxfKZF6XId1qc5RShZ4AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"mixed"}
]
```
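The coding result table above is recovered from this batch response by matching on the comment's ID. A minimal sketch of that lookup, assuming the raw response is a JSON array of per-comment records like the one shown (the single-entry string below is a hypothetical stand-in for the full response):

```python
import json

# Hypothetical stand-in for a raw LLM batch response: a JSON array in which
# each record carries a comment ID plus the four coding dimensions.
raw_response = """
[
  {"id": "ytc_UgyR1ppDcdcDLctCvEF4AaABAg",
   "responsibility": "unclear",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "fear"}
]
"""

# Index the batch by comment ID so any single comment's coding can be
# looked up directly, as the table above does for one comment.
codings = {record["id"]: record for record in json.loads(raw_response)}

row = codings["ytc_UgyR1ppDcdcDLctCvEF4AaABAg"]
print(row["reasoning"])  # consequentialist
print(row["emotion"])    # fear
```

In practice the raw model output may need validation before indexing (e.g. checking that every record has an `id` and that each dimension takes one of the expected values), since a malformed response would otherwise silently drop or corrupt codings.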