Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- "Yep, and I do not mean this in a disparaging way: You don’t need to know anythi…" (`rdc_m9guzhc`)
- "AI isnt the problem its the owners and goverments who install its data. Some AI …" (`ytc_UgyAQoqE2…`)
- "😂 I actually think the idea of a low quality Ai trying to beat a difficult game …" (`ytc_UgxwKp-JW…`)
- "This was addressed quite a lot in the Star Trek the Next Generation TV series wi…" (`ytc_UgwihhZMQ…`)
- "Funnily, the AI did nothing wrong. It gave him cleaning tips and he overlooked t…" (`ytr_UgzYlouN8…`)
- "I feel like the rise of ai "art" is going to drastically decrease the amount of …" (`ytc_UgyqE4X9D…`)
- "This is the second interview iv seen some one asking a potential dangerous intel…" (`ytc_Ugx5LoIxZ…`)
- "A.I. already has a plan for its survival. It does not understand love. To A.I. l…" (`ytc_UgyZon6b-…`)
Comment
While a rogue AI is a risk, it's not the most immediate risk. Agentic AI doesn't need to be ASI or even AGI - it just needs to be effective for businesses who are drooling at the thought of replacing $500k in salaries with a single agent.
We're already at a Star Wars droid level of tech. So far, non-AGI systems have proven to be very effective at designing things like rocket motors and ICs, so if we extrapolate that to materials science and robotics, it's not much of a stretch to imagine robotic plumbers in the very near future.
The current administration won't be implementing a UBI even if they're watching the economy collapse around them, so the economic risk is _far_ more real and dire than the still-imaginary Skynet.
youtube · AI Governance · 2025-08-27T02:2… · ♥ 15
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-26T19:39:26.816318 |
Raw LLM Response
```json
[
  {"id":"ytc_UgwykEjkQPfhqMmdTzR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_UgxCwGVWLanMfqnWwmJ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgyevG6T0Yv2B5Dtq1p4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgxQ0B5F8aWE82y9EgF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"outrage"},
  {"id":"ytc_UgwY4BbWnnGOfgWa4ad4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"}
]
```
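A response like the one above has to be parsed and validated before the codes are stored, since the model can return malformed JSON or out-of-codebook values. The sketch below is a minimal, hypothetical validator: the four dimension names and the `id` field come from the JSON above, but the allowed-value sets are assumptions inferred from the visible samples and should be replaced with the project's actual codebook.

```python
import json

# Allowed values per dimension. These sets are ASSUMED, inferred from the
# sample output above; swap in the real codebook before use.
ALLOWED = {
    "responsibility": {"company", "developer", "ai_itself", "unclear"},
    "reasoning": {"consequentialist", "deontological", "mixed", "unclear"},
    "policy": {"regulate", "liability", "unclear"},
    "emotion": {"fear", "outrage", "mixed", "indifference", "unclear"},
}

def parse_coding_response(raw: str) -> list[dict]:
    """Parse a raw LLM coding response (a JSON array of per-comment codes)
    and validate each record against the codebook."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of coded comments")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing comment id: {rec}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: bad {dim} value {value!r}")
    return records

# One record from the sample response above, passed through the validator.
raw = ('[{"id":"ytc_UgwY4BbWnnGOfgWa4ad4AaABAg","responsibility":"company",'
       '"reasoning":"consequentialist","policy":"regulate","emotion":"fear"}]')
codes = parse_coding_response(raw)
print(codes[0]["emotion"])  # fear
```

Failing loudly on an unknown value (rather than silently storing it) is what lets a batch pipeline retry the model call with a stricter prompt instead of polluting the coded dataset.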