Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- `ytr_Ugx2Uh03q…`: Smart people code it. So ai will be leftist. Problem is that it learns from eve…
- `ytc_UgxC-8EUb…`: "Opt Out" should be the default. If they want their art trained and scrapped by …
- `ytr_Ugw7Gk086…`: man you are very wrong, people eat AI content up with no thought whatsoever, my …
- `rdc_grsixbe`: I'm saying the comments seem to assume because the article said 'wealthy countie…
- `ytr_UgyU8xMvy…`: They definitely sound like that, don't believe me? Look no further than this com…
- `ytc_UgxFdCodp…`: The problem of super intelligence that no one mentions _ Its cost... itll quite …
- `ytc_UgzNkfQZ3…`: Anytime I get into a female character ai chat (unless it Salem from RWBY) They s…
- `rdc_o81q2rs`: Trump spelled it out pretty well for everyone, Anthropic is a company whose empl…
Comment
@ivankaramasov I agree that AI is powerful but I think of it as similar to any tool in the hands of an employee. A 'bad' employee could launch nuclear weapons or sabotage our food, or blow up a nuclear power plant (so could hackers with a computer virus). So, if we just hand over the running of a nuclear power plant, or food production, to AI, with no oversight, then yes, there could be terrible outcomes. But businesses and governments have all sorts of protocols to prevent bad actors from causing problems. We just need the same for AI. For example, I do not expect a nuclear power plant operator to hand its operations to AI, however, I do expect nuclear power plant operators to expose AI to a simulation of plants and ask AI how to improve things. In the end, if you think people are dumb enough to let AI destroy us, then we are probably dumb enough to let nuclear weapons and biohazards and such destroy us, so it is just another tool with risks.
Source: youtube
Topic: AI Governance
Posted: 2025-10-15T19:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | liability |
| Emotion | fear |
| Coded at | 2026-04-27T06:24:59.937377 |
Raw LLM Response
```json
[
  {"id":"ytr_Ugwx2COP9PTFWz8AwV14AaABAg.AOJ0Yx2aFioAOLh8VmWyvN","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
  {"id":"ytr_UgyIZVwdnumDkTMlJ-l4AaABAg.AOIyFwO8xOCAOKQzgA1hE6","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"fear"},
  {"id":"ytr_Ugw8CMMnJXa_b7zuCU14AaABAg.AOIx_Fj-xDKAOIzCN6t0sc","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"outrage"},
  {"id":"ytr_UgxaNnI_bAbU-dftvTx4AaABAg.AOIuM-kQAOJAOL61Js2_Cd","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_UgycipP42UwlR70Uzx14AaABAg.AOIuAsJz7RGAOJLeRa3i8B","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
  {"id":"ytr_UgycipP42UwlR70Uzx14AaABAg.AOIuAsJz7RGAOJMqWd4u_V","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytr_Ugx-gdDCaQ_yishA6-F4AaABAg.AOIsC7AIqNrAOJ7ccOZOpZ","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytr_Ugx-gdDCaQ_yishA6-F4AaABAg.AOIsC7AIqNrAOLWXOMJbKx","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
  {"id":"ytr_Ugx-gdDCaQ_yishA6-F4AaABAg.AOIsC7AIqNrAOLakrkl9P5","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"indifference"},
  {"id":"ytr_UgyQKbLJu4dbiNsUeeR4AaABAg.AOIqhxzMY0fAOJ0K1_glBH","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"resignation"}
]
```
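The raw response is a JSON array in which each element carries a comment `id` plus the four coded dimensions (responsibility, reasoning, policy, emotion). A minimal sketch of how such a response could be parsed and indexed for the comment-ID lookup shown above (the IDs and helper name here are hypothetical, not taken from the actual tool):

```python
import json

# Hypothetical sample response in the same shape as the raw LLM output above.
RAW_RESPONSE = """
[
  {"id": "ytr_abc", "responsibility": "user", "reasoning": "consequentialist",
   "policy": "liability", "emotion": "fear"},
  {"id": "ytr_def", "responsibility": "none", "reasoning": "mixed",
   "policy": "none", "emotion": "outrage"}
]
"""


def index_codings(raw: str) -> dict[str, dict[str, str]]:
    """Parse the model output and key each coding by its comment ID."""
    codings = json.loads(raw)
    # Keep only the four coded dimensions; the ID becomes the dictionary key.
    return {row["id"]: {k: v for k, v in row.items() if k != "id"}
            for row in codings}


by_id = index_codings(RAW_RESPONSE)
print(by_id["ytr_abc"]["emotion"])  # prints "fear"
```

Keying on the ID makes the "look up by comment ID" view a single dictionary access rather than a scan of the full array on every request.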