Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples (click to inspect):

- “I miss when people were accused of tracing and “copying someone’s art style” ins…” (ytc_Ugz-dTwtg…)
- “on giving robots negative "stimulus" to coerce them to work for economic profit:…” (ytc_UggFuDC5x…)
- “If anything has made philosophy science rather than a liberal art, it is machine…” (ytc_Ugxu0SgXL…)
- “Well, the matter of “drives,” underlying “behaviors,” is making two assumptions …” (ytc_UgximKBdn…)
- “You can't convince an LLM of anything. You just guide it it's algorithm model. …” (ytc_Ugw9Ikx5q…)
- “Is this the actual GTP-4 or ChatGPT-4? They may seem identical, but ChatGPT-4 is…” (rdc_jdiwm6o)
- “@ey1414u ain’t got to be „technical“ to type two prompts for a badly generated a…” (ytr_Ugx-oqeIF…)
- “@coffeeperson1461 if the ai already understands what is toxic, it's no problem t…” (ytr_UgwyMvKLG…)
Comment
Hmm, Then you look at Apple research creating unique reasoning puzzles. And all AI models completely fail to solve a single thing.
That says Superintelligence is far away.
But specialised AI that can do specific tasks will be (is) much better than humans
youtube · AI Governance · 2025-09-04T09:5…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
  {"id":"ytc_Ugz3TppD5OBKB5jN7AB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_UgwYbHpGX6vwznztl9p4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzne3CZ_mDTh7JDQ6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwJxQw5IQzExr1a0754AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgyjYp9ahfZNMhOKHcZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgycXxmwqs0S6PbERE14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_UgygkbZm3TM7L6cq5LB4AaABAg","responsibility":"distributed","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
  {"id":"ytc_UgwYtqignhFtHS418jN4AaABAg","responsibility":"ai_itself","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_UgwOGjEL3VtgFVQqdBZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugx-jIocSklBRfD6pXV4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
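The look-up-by-comment-ID step above can be sketched in a few lines: parse the raw response as a JSON array, check that every record carries the four coding dimensions from the table (responsibility, reasoning, policy, emotion), and index the records by their `id`. This is a minimal illustration assuming the response shape shown above; `index_by_id` and `DIMENSIONS` are hypothetical names, not part of the tool.

```python
import json

# Two records copied verbatim from the raw LLM response above.
raw = '''[
  {"id":"ytc_Ugz3TppD5OBKB5jN7AB4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
  {"id":"ytc_Ugzne3CZ_mDTh7JDQ6h4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"}
]'''

# The four coding dimensions from the "Coding Result" table (assumed schema).
DIMENSIONS = {"responsibility", "reasoning", "policy", "emotion"}

def index_by_id(response_text: str) -> dict:
    """Parse a raw coding response and index its records by comment ID,
    rejecting any record that lacks one of the expected dimensions."""
    by_id = {}
    for rec in json.loads(response_text):
        missing = DIMENSIONS - rec.keys()
        if missing:
            raise ValueError(f"{rec.get('id')}: missing dimensions {missing}")
        by_id[rec["id"]] = rec
    return by_id

codings = index_by_id(raw)
print(codings["ytc_Ugz3TppD5OBKB5jN7AB4AaABAg"]["emotion"])  # fear
```

Indexing by `id` makes the "Look up by comment ID" operation a single dictionary access, and the schema check surfaces malformed model output at parse time rather than during inspection.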