Raw LLM Responses
Inspect the exact model output for any coded comment, either by looking it up directly by comment ID or by browsing the random samples below.
Random samples (one comment per entry; texts are truncated as captured):

- `ytc_UgwNz80Q4…`: The fact experts say you need lidar to ever become full self driving, yet telsa…
- `rdc_o4hlp7n`: You’re going to get downvoted but you’re correct. This sub is people who can’t g…
- `rdc_eh3wzqo`: This is the best tl;dr I could make, [original](https://www.commondreams.org/new…
- `ytc_Ugwy-x9K6…`: Remember Data and Lore from Star Trek TNG? Maybe we got it backwards🤔 Maybe by …
- `ytc_Ugw8p-mJV…`: Perhaps more jobs? I don’t think it will replace all jobs but how will it create…
- `ytc_UgyXM95yD…`: Points are 1. Cameras are not good enough for autonomous driving. 2. Accident…
- `ytc_Ugz9daB98…`: Yes and no about the AI thing, the "AI" we have right now are essentially big YE…
- `ytc_Ugxotep83…`: Is this voiced by your own voice ran through an AI? It sounds... slightly o…
Comment
> "The company said it would now address those risks through its terms of service, restricting the use of its AI models in political campaigns and lobbying, and monitoring how people are using the models once they are released for signs of violations.
>
> OpenAI also said it would consider releasing AI models that it judged to be “high risk” as long as it has taken appropriate steps to reduce those dangers—and would even consider releasing a model that presented what it called “critical risk” if a rival AI lab had already released a similar model. Previously, OpenAI had said it would not release any AI model that presented more than a “medium risk.”"
Source: reddit · Topic: AI Governance · Posted: 1745167927 (Unix epoch, ≈ 2025-04-20 UTC) · ♥ 48
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | company |
| Reasoning | deontological |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-25T08:33:43.502452 |
Raw LLM Response
```json
[
  {"id": "rdc_mo4be3s", "responsibility": "government", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_mo7nnz8", "responsibility": "company", "reasoning": "consequentialist", "policy": "none", "emotion": "fear"},
  {"id": "rdc_mo4fsmk", "responsibility": "none", "reasoning": "unclear", "policy": "none", "emotion": "indifference"},
  {"id": "rdc_mo5aiqq", "responsibility": "user", "reasoning": "deontological", "policy": "regulate", "emotion": "outrage"},
  {"id": "rdc_mo46j35", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"}
]
```
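The raw response is a JSON array with one record per coded comment. A minimal sketch of how such a batch might be parsed and validated before the dimension values reach a results table like the one above (the `SCHEMA` vocabulary is inferred from the samples on this page, not an authoritative codebook, and `validate_batch` is a hypothetical helper):

```python
import json

# Allowed values per coding dimension, inferred from the outputs shown
# above (an assumption, not the tool's official codebook).
SCHEMA = {
    "responsibility": {"company", "government", "user", "none"},
    "reasoning": {"deontological", "consequentialist", "unclear"},
    "policy": {"regulate", "industry_self", "none"},
    "emotion": {"outrage", "fear", "indifference", "approval"},
}

def validate_batch(raw: str) -> dict:
    """Parse a raw LLM response and index valid records by comment ID.

    Records missing an ID or containing an out-of-vocabulary value are
    skipped rather than raising, since raw LLM output can be noisy.
    """
    coded = {}
    for record in json.loads(raw):
        cid = record.get("id")
        if not cid:
            continue
        if all(record.get(dim) in allowed for dim, allowed in SCHEMA.items()):
            coded[cid] = record
    return coded

raw = ('[{"id":"rdc_mo4be3s","responsibility":"government",'
       '"reasoning":"deontological","policy":"regulate","emotion":"outrage"}]')
print(validate_batch(raw)["rdc_mo4be3s"]["policy"])  # → regulate
```

Skipping malformed records instead of raising keeps one bad line in a batch response from discarding the other coded comments.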