Raw LLM Responses
Inspect the exact model output for any coded comment. A record can be looked up directly by its comment ID, as sketched below, or browsed via the random samples that follow.
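Mechanically, a lookup is a scan of the stored codings for a matching ID. A minimal sketch, assuming the codings are saved as a JSON array shaped like the Raw LLM Response shown further down (the file name `coded_comments.json` is hypothetical):

```python
import json


def lookup_coding(comment_id: str, path: str = "coded_comments.json") -> dict | None:
    """Return the coded record for one comment ID, or None if absent.

    Assumes `path` holds a JSON array of objects shaped like the
    Raw LLM Response below: {"id", "responsibility", "reasoning",
    "policy", "emotion"}.
    """
    with open(path, encoding="utf-8") as f:
        records = json.load(f)
    return next((r for r in records if r["id"] == comment_id), None)


# Example: fetch the coding for the comment shown below.
# print(lookup_coding("ytc_UgyKUsP06B9SavJTUC94AaABAg"))
```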
Random samples (comment ID, then a truncated preview):

- ytr_UgwRBexCT…: Unless it does (and it won't, don't kid yourself), this type of argument is enti…
- ytc_UgzdnjI2E…: The answer is Yes, Don't feel happy by watching these videos. Already the AI rev…
- ytc_UgjXLgiUC…: Is "Strong AI" still a thing? I thought it is properly called "Artificial Genera…
- ytc_UgwDNfN8_…: old interview 😂, bro interview me, im the guy who made virtual humans and consc…
- ytc_Ugy3p-KPU…: There will need to be an appreciation that our economy is based on productive wa…
- ytc_Ugw6me7k1…: I understand how role playing works, and i understand that LLM's don't have a "f…
- ytc_Ugx8SkEwt…: This is a bad analysis. While yes, a lot of artists do mimic other artists while…
- ytc_UgxyqlVwq…: Here is the thing: AI may advance over proportionally but there are two major is…
Comment
> I do think it is worth noting the difference between 1000 crashes happening with autopilot on (and how many of them could have easily been avoided had there been human intervention, and the large number of human caused accidents in the same time period. AI will only get better, if they ever add lidar then instantly better. Yet humans as a whole wont get better at driving.
>
> Not trying to take teslas side or anything, just pointing out that humans cause accidents too, and with the conjunction of humans intelligence and constant computer processing of AI that is when peak safety will probably occur.
>
> Imagine the human doing the steering and AI doing a dynamic cruise control, speeding gets cut instantly (obviously an override with the brake pedal just like normal cruise control).
Source: youtube · AI Harm Incident · 2024-12-21T15:4…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | distributed |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
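The four dimensions draw from a small closed vocabulary. A sketch of that codebook as Python enums, using only the labels visible on this page (the real label set may be larger):

```python
from enum import Enum


class Responsibility(Enum):
    NONE = "none"
    DISTRIBUTED = "distributed"
    COMPANY = "company"
    AI_ITSELF = "ai_itself"


class Reasoning(Enum):
    CONSEQUENTIALIST = "consequentialist"
    DEONTOLOGICAL = "deontological"
    UNCLEAR = "unclear"


class Policy(Enum):
    NONE = "none"
    REGULATE = "regulate"
    LIABILITY = "liability"
    BAN = "ban"
    UNCLEAR = "unclear"


class Emotion(Enum):
    APPROVAL = "approval"
    OUTRAGE = "outrage"
    FEAR = "fear"
    INDIFFERENCE = "indifference"
    MIXED = "mixed"
```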
Raw LLM Response
```json
[
{"id":"ytc_Ugy2_1uSyaq8TM70Cyd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgyKUsP06B9SavJTUC94AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugz_1cocp58SL-YM35h4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgxfAgU_hlzI1Z57KmF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
{"id":"ytc_Ugzl6kUoFMNhzCv8Aqt4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgxrY_Yx1nBCwZPngQt4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgyP-2BHBGnLuM8-lER4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwpSaFnZvIZDzCQlQd4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugzw7J4GRX4g11I243p4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyC49DKDQFe7J4vr3R4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"mixed"}
]
```
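Raw model output is not guaranteed to be clean JSON, so it pays to parse it defensively. A minimal sketch, assuming the response is either a bare JSON array like the one above or an array wrapped in extra text such as a code fence:

```python
import json
import re


def parse_batch_response(raw: str) -> list[dict]:
    """Parse a batch coding response like the one above.

    Tries strict JSON first; if the model wrapped the array in prose
    or a code fence, falls back to extracting the outermost [...] span.
    """
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\[.*\]", raw, re.DOTALL)
        if match is None:
            raise ValueError("no JSON array found in model output")
        return json.loads(match.group(0))
```

Each parsed record can then be checked against the enums above before it is written to the store that `lookup_coding` reads.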