Raw LLM Responses
Inspect the exact model output for any coded comment; entries can be looked up by comment ID.
Comment

> What the actual eff, will someone please explain to Dr Wolfram basic Intellidynamics and iterations thereof? And send him a copy of Kevin Kelley’s book “What Technology Wants”? If computational irreducibility does not easily help us grasp complex Intellidynamics, it may not help us at all with extreme ai alignment issues.

youtube · AI Governance · 2025-03-24T07:3… · ♥ 1
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | mixed |
| Policy | unclear |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_Ugx5mhqb_lSeZWtQve54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
{"id":"ytc_Ugzb2qOXs9QeyJVXFuB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_Ugzci4azcKKznmKT4Pt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgyFada6LeqqUOef58R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgyMqi1HzvbCRAUHhZF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgzoNUAH6G4MvLVfH9V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgwdY12QxCZKeylSuYV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
{"id":"ytc_UgzVmoWLn37HGGM2Cxh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
{"id":"ytc_UgzGDG9HkSk1yB6j5I94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
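A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed values per dimension are inferred from the rows shown here (not from a documented schema), so the `ALLOWED` sets are an assumption and would need to match the actual codebook.

```python
import json

# Allowed codes per dimension, inferred from the sample response above.
# NOTE: this is an assumption, not a documented schema.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"outrage", "fear", "approval", "resignation", "mixed"},
}

def validate(raw_response: str) -> list[dict]:
    """Parse a raw LLM response and reject rows with unknown codes."""
    rows = json.loads(raw_response)
    for row in rows:
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(
                    f"{row.get('id')}: unexpected {dim} value {row.get(dim)!r}"
                )
    return rows

# Example: the first row from the response above.
raw = '[{"id":"ytc_Ugx5mhqb_lSeZWtQve54AaABAg","responsibility":"unclear",' \
      '"reasoning":"mixed","policy":"unclear","emotion":"outrage"}]'
coded = validate(raw)
print(coded[0]["emotion"])  # outrage
```

A check like this catches the common failure mode where the model drifts outside the codebook (e.g. inventing a new emotion label), so malformed batches can be re-queued instead of silently polluting the coded dataset.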