# Raw LLM Responses
Inspect the exact model output for any coded comment.
## Random samples

- `ytc_UgySlXL1k…`: "I work a lot with AI, it is a very good and useful tool in our human tasks. Perf…"
- `ytc_UgxsEEB8j…`: "As a traditional artist, i can say i dont like ai. And i personally dont do digi…"
- `ytc_Ugw1vq0HR…`: "did we just show a chatbot images of hitler and it turned into a nazi by 4chan o…"
- `rdc_lubo9gw`: "Money speaks . Previously it took western so called ally 50 years to provide po…"
- `ytr_UgxKCgf8J…`: "@username7763 no, but having lots of money does let you dump that money into a h…"
- `ytc_UgxG_E-xE…`: "i am not that well versed in AI but if u write scripts that tells chatgpt to shu…"
- `ytc_UgxobHMDH…`: "I think this interview is interesting. What I remain entirely unconvinced about,…"
- `ytc_UgzXFf67C…`: "Maybe the \"second coming\" was always AI. Think about it. AI will be able to know…"
## Comment

> What's bothering me is the transition phase.. we aren't in FSD yet so people who want to relax behind the wheel have to monitor the road etc..
> But they dont understand it's not FSD, they have bad reactions (swerving instead of braking, sometimes not even reacting at all) or just they let the autopilot do everything and not monitor anything.
> At least in France, i see people with manual car have way better reactions than people with semi-autonomous ones. They know if they misjudge the situation, don't break, don't do anything, the car will just continue and crash.. you can even see in the US videos, so much Tesla accidents could have been avoided if the owner really knew how to drive/was fully* aware of the situation. That's sad

Source: youtube · 2022-01-04T12:5…
## Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | fear |
| Coded at | 2026-04-26T23:09:12.988011 |
## Raw LLM Response
```json
[
  {"id":"ytc_UgzqzqZCuaxc2Xi3Rhd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugw7VkeZfk6iMyLHMdx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_UgzYyo3j8GDhqvKlEVR4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxexkQg6tZdTYpmKgd4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxC0uBDLUrtMBMyFZx4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"fear"},
  {"id":"ytc_Ugw_CFZRHmgDJIkCXKR4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
  {"id":"ytc_Ugzix9MI7_81v0ODBxh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
  {"id":"ytc_UgzEuAJQ1XsUCCnexfp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
  {"id":"ytc_UgxdZWdMNE5pYggbp3x4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgxN98Cl05jycq5Zf1l4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"}
]
```
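Batch responses in this shape can be machine-checked before the values reach the coding table. Below is a minimal validation sketch; the allowed values per dimension are inferred from the sample above, not from a documented codebook, and the `validate_batch` helper name is ours.

```python
import json

# Allowed values per dimension: an assumption inferred from the sample
# response above, not a documented codebook.
ALLOWED = {
    "responsibility": {"user", "ai_itself", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "unclear"},
    "emotion": {"fear", "outrage", "indifference", "approval", "mixed", "unclear"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM batch response and reject malformed records."""
    records = json.loads(raw)
    if not isinstance(records, list):
        raise ValueError("expected a JSON array of records")
    for rec in records:
        if "id" not in rec:
            raise ValueError(f"record missing id: {rec!r}")
        for dim, allowed in ALLOWED.items():
            value = rec.get(dim)
            if value not in allowed:
                raise ValueError(f"{rec['id']}: unexpected {dim}={value!r}")
    return records

# Example with a hypothetical comment ID:
raw = '[{"id":"ytc_example","responsibility":"user",' \
      '"reasoning":"consequentialist","policy":"none","emotion":"fear"}]'
records = validate_batch(raw)
print(len(records))  # 1
```

Validating at ingest means a model hallucinating a new label (or dropping a field) fails loudly instead of silently polluting the coded dataset.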