Raw LLM Responses
Inspect the exact model output for any coded comment, or look up a coding by its comment ID.
Comment (source: youtube, posted 2026-02-24T23:0…)

> Has anybody here been using FSD(Supervised)? Also, Do you know anything about training an AI model to do this? Yes, the years of diver miles up until very recently were essentially drivers doing the training of the model in real world situations. It should have been advertised as such. I used it about 95% of the time. You cannot be complacent. I also have had no incidents. We are always evaluating the decisions that are made. It has reacted and gotten us out of some very hairy situations. I don't expect to stop paying attention to what it is doing for a long time. I only subscribe, I did NOT pay up front for it.
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | user |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | approval |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id":"ytc_UgyoZ0LyD7csfwHar6B4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgwYr5b79PW0d65mBJJ4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzfq2DLzjjYbzl5kOp4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgzxjlMpkFJiSq2Gnix4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugz3H-QUuENUqHEdWM14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_UgzeXb-bZvf1bvok8sl4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_UgzIcFoix72lPgzZwUp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},
  {"id":"ytc_UgzhYcY58bEaHC8oyFF4AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"outrage"},
  {"id":"ytc_Ugz6V6QvByYcJp2VS594AaABAg","responsibility":"company","reasoning":"deontological","policy":"liability","emotion":"mixed"},
  {"id":"ytc_Ugy0oCNPLtNa2ZzuPxJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"}
]
```
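A raw response like this is a JSON array of per-comment codings, and the comment-ID lookup amounts to indexing that array by `id`. The sketch below illustrates one way to do that; the two entries are copied from the array above, but the dictionary-index approach is an assumption for illustration, not the tool's actual implementation.

```python
import json

# Raw model output: a JSON array of codings, one object per comment.
# These two entries are taken verbatim from the response above.
raw = """[
  {"id":"ytc_Ugz3H-QUuENUqHEdWM14AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"industry_self","emotion":"approval"},
  {"id":"ytc_Ugy0oCNPLtNa2ZzuPxJ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"fear"}
]"""

# Parse the response and index the codings by comment ID.
codings = {row["id"]: row for row in json.loads(raw)}

# Look up one coding by its comment ID and read a dimension.
coding = codings["ytc_Ugz3H-QUuENUqHEdWM14AaABAg"]
print(coding["emotion"])  # -> approval
```

Indexing by ID this way makes each lookup O(1), which matters once a batch response covers many comments.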