Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment
I retired as a programmer and haven't studied its use in that field lately, but I practiced using AI to code a simple Python game. I found it did a pretty good job of guessing what I wanted, but the process was like peeling an onion: I had to evaluate its results to determine which parameters I had failed to provide for it to reach my desired goal. This was an iterative process and far from autonomous. I'm ambivalent about the use of AI, because this training could eliminate my job as a professional, analytical programmer. However, I never ran into a situation where the AI proactively asked me about details it might need to carry out the task. It always produced its best guess, then prompted me for any corrections.
youtube · AI Governance · 2026-04-14T20:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_Ugz3GRYb8s9Ni7t19754AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugx0j8CY_Fse0KouKex4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"approval"},
{"id":"ytc_UgxP3orvhcZSgl0pFRh4AaABAg","responsibility":"none","reasoning":"none","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgxfaUqSQriK3Utw3Zp4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
{"id":"ytc_UgwZTRrts7i93K8i3U14AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"resignation"},
{"id":"ytc_UgxRW_ImxN46ifv77J94AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"unclear","emotion":"fear"},
{"id":"ytc_UgxHfi4VQjAx8D3XSV54AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugy6v2k63_dP-PJw--p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"},
{"id":"ytc_Ugzsyu259H9-0A6ZVDR4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz5FDY4Hkk-SnoBjyx4AaABAg","responsibility":"distributed","reasoning":"virtue","policy":"unclear","emotion":"fear"}
]
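The raw response above is a JSON array of per-comment codes along four dimensions. A minimal sketch of how such a response could be parsed and sanity-checked is below; `parse_codes` and the `ALLOWED` value sets are hypothetical helpers, with the allowed values inferred only from the codes that appear in this page's results, not from the project's actual codebook.

```python
import json

# Hypothetical code values per dimension, inferred from the results shown
# on this page (the real codebook may differ).
ALLOWED = {
    "responsibility": {"company", "ai_itself", "distributed", "none", "unclear"},
    "reasoning": {"deontological", "consequentialist", "virtue", "mixed", "none", "unclear"},
    "policy": {"regulate", "liability", "none", "unclear"},
    "emotion": {"outrage", "fear", "resignation", "approval", "indifference"},
}

def parse_codes(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw LLM response into {comment_id: codes}, rejecting
    any value outside the expected code sets."""
    coded = {}
    for record in json.loads(raw):
        cid = record.pop("id")  # remaining keys are the four dimensions
        for dim, value in record.items():
            if value not in ALLOWED.get(dim, set()):
                raise ValueError(f"{cid}: unexpected {dim}={value!r}")
        coded[cid] = record
    return coded

# Two records copied from the raw response above.
raw = '''[
 {"id":"ytc_Ugz3GRYb8s9Ni7t19754AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
 {"id":"ytc_Ugzsyu259H9-0A6ZVDR4AaABAg","responsibility":"none","reasoning":"none","policy":"none","emotion":"indifference"}
]'''
codes = parse_codes(raw)
```

Validating against a closed value set like this catches the most common failure mode of JSON-mode coding runs: the model inventing an off-codebook label.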