Raw LLM Responses
Inspect the exact model output for any coded comment, or look one up by comment ID.
Random samples — click to inspect

- rdc_nsf1ka0: "It's simple, the economics do not make sense for OpanAI. Google, Amazon & M…"
- ytc_Ugwf4s7pP…: "Does anyone else feel AI is hyped? I mean, it can destroy jobs, but that's what …"
- ytc_UgwBXkk3Q…: "Insane take: Output does not match skill and pumping out AI code is not widening…"
- ytr_UgwOKPw8q…: "Yeah, anyone saying AI won't take their job with 100 percent certainty is lying …"
- ytc_Ugz1gQvYV…: "I don’t think they realize that instead of using AI they can work with other art…"
- ytc_UgzHJ4042…: "It all depends on what humans allow AI to do. If they allow it to take over, it …"
- ytc_UgwPvXIXq…: "If all this was not so sick it would be funny: Yes,the wealth-class elites are …"
- ytc_Ugw9Ga-ny…: "Let me give you my hypothesis we're f***** thank you humans for being so lazy to…"
Comment (youtube · AI Governance · 2025-01-31T00:3… · ♥ 6)

> Just based on introductory remarks, Wolfram uses nature and his life-long experience of computational intelligence to console himself that AGI poses no threat that humans cannot problem-solve. However, neither nature nor pre-AI technology include the power to 1) develop an autonomous agency; and 2) deceive humans as to agentic intentionality.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | developer |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response

```json
[
  {"id":"ytc_Ugx5mhqb_lSeZWtQve54AaABAg","responsibility":"unclear","reasoning":"mixed","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_Ugzb2qOXs9QeyJVXFuB4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzci4azcKKznmKT4Pt4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgyFada6LeqqUOef58R4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"outrage"},
  {"id":"ytc_UgyMqi1HzvbCRAUHhZF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgzoNUAH6G4MvLVfH9V4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"ban","emotion":"outrage"},
  {"id":"ytc_UgwdY12QxCZKeylSuYV4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgzVmoWLn37HGGM2Cxh4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"resignation"},
  {"id":"ytc_UgzGDG9HkSk1yB6j5I94AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"approval"}
]
```
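A raw batch like this can be parsed and sanity-checked before its rows are written to the coding table. Below is a minimal Python sketch, assuming the label vocabularies are exactly those visible in the responses on this page (the actual codebook may define more values); `parse_batch` and `ALLOWED` are illustrative names, not part of the tool.

```python
import json

# Allowed values per dimension, inferred from the responses shown above.
# Assumption: the real codebook may contain additional labels.
ALLOWED = {
    "responsibility": {"developer", "user", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "ban", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "resignation", "mixed"},
}

def parse_batch(raw: str) -> dict:
    """Parse one raw LLM response and index the codings by comment ID.

    Raises ValueError on malformed rows or out-of-vocabulary labels, so a
    bad batch is rejected whole rather than partially ingested.
    """
    coded = {}
    for row in json.loads(raw):
        cid = row["id"]
        for dim, allowed in ALLOWED.items():
            if row.get(dim) not in allowed:
                raise ValueError(f"{cid}: bad {dim!r} value {row.get(dim)!r}")
        coded[cid] = {dim: row[dim] for dim in ALLOWED}
    return coded

# Usage: look up one coding by comment ID, as the page header describes.
raw = ('[{"id":"ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg","responsibility":"developer",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
coded = parse_batch(raw)
print(coded["ytc_UgzhPjRIQCy-Y-ojdDN4AaABAg"]["policy"])  # regulate
```

Validating against a fixed vocabulary catches the most common failure mode of LLM coders: a response that is valid JSON but drifts from the requested label set.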