Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up any coded comment by its comment ID.
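A minimal sketch of how such a lookup could work, assuming the coded results are stored one JSON object per line (JSONL) with the same fields shown in the raw response below. The file name `coded_comments.jsonl` is a placeholder for illustration, not the tool's actual storage.

```python
import json

def lookup_comment(comment_id: str, path: str = "coded_comments.jsonl"):
    """Return the coded record for a comment ID, or None if absent.

    Assumes a JSONL file where each line is an object like
    {"id": "ytc_...", "responsibility": ..., "reasoning": ...,
     "policy": ..., "emotion": ...}, i.e. the format shown in the
    raw LLM response below. The file layout is an assumption.
    """
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            if record["id"] == comment_id:
                return record
    return None

# Example: look up one of the IDs from the raw response below.
print(lookup_comment("ytc_UgxC0WdgkGAWSjRagCZ4AaABAg"))
```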
Random samples

- "Don't forget to mention that you have your fail safe. Should anything happen to …" (ytc_UgwVxha6e…)
- "We appreciate your feedback. If you have any specific concerns or questions abou…" (ytr_Ugx57xc2K…)
- "I will pay, however much it takes to have a robot. Pick up all the dog shit in m…" (ytc_UgwpCPHl7…)
- "ENGINEERS TO BE AUTOMATED?? no way, engineering is all about creativity and deve…" (ytc_UgzmK2RP1…)
- "The future of humanity is by no means certain. But we need not fear the awakenin…" (ytc_UgzF205MR…)
- "Go to a major populated city during rush hour in South East Asia. True test of s…" (ytc_UgxXvvTD8…)
- "he doesn't know sh, he called llm large learning model, doesn't even know it sta…" (ytr_UgxcQq8G2…)
- "my mom suggested we go to a carls jr nearby that has an ai that takes your order…" (ytc_UgykwCjKh…)
Comment
> If you had lived through the very beginning of personal computers in the 70s and early 80s, like I did, you would have a completely different opinion! What we all now take for granted when we use our smart phones and laptops obviously didn't exist then. We were going from one bug to another... and you better had to be able to intervene on the software yourself! A technology needs some time to mature.
>
> Same thing with autonomous driving. You just CAN'T take the present numerous bugs this technology still present, as representative of the future in ANY WAY! Especially if you talk Waymo or Cruise, two technologies that have no future. These bugs will get ironed out, quicker that you can imagine, and autonomous cars will be an order of magnitude or two safer than the average human driver! It is inevitable! There is no question of "if", only of "when"... next year? in 2 years? Hardly more...
Source: youtube · Posted: 2024-11-15T08:3… · Likes: 2
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:26:44.938723 |
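The value sets for the four dimensions can be checked mechanically. The sketch below validates a coded record against the values that appear in this page's sample output only; the actual codebook may define labels not shown here.

```python
# Allowed values per dimension, inferred from the sample output on this
# page only -- the full codebook may include additional values.
ALLOWED = {
    "responsibility": {"none", "company", "distributed", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "mixed"},
    "policy": {"none", "regulate", "ban"},
    "emotion": {"approval", "indifference", "outrage", "resignation", "mixed"},
}

def validate(record: dict) -> list[str]:
    """Return a list of problems with a coded record; empty means valid."""
    problems = []
    for dim, allowed in ALLOWED.items():
        value = record.get(dim)
        if value not in allowed:
            problems.append(f"{dim}={value!r} not in {sorted(allowed)}")
    return problems

# The coding result shown above passes:
print(validate({"responsibility": "none", "reasoning": "consequentialist",
                "policy": "none", "emotion": "approval"}))  # -> []
```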
Raw LLM Response
[
{"id":"ytc_UgxC0WdgkGAWSjRagCZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxAQarRJrYqVpJ8rt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgyQLoewFF5R2yb4VTh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw4o2VdrMOKTIg9tDt4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgwGNgUymP0_nVYypFR4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwFyP1HE-f8RnRR_T14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgwV2cOQm4Y4Rc7b9S54AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw_MM8-GkpEyXLke2B4AaABAg","responsibility":"none","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgyDREG-ULfUGJgZzDF4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytc_UgyDwQdL8Z4vH9jO8_d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
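Since the model returns one JSON array per batch, recovering an individual coding result is a matter of parsing the array and indexing by `id`. A minimal Python sketch, assuming the response is well-formed JSON as above; a real pipeline would also have to handle truncated or otherwise malformed model output.

```python
import json

# Two records copied from the raw response above, as a stand-in for a
# full batch response string.
raw_response = """[
{"id":"ytc_UgxC0WdgkGAWSjRagCZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_UgyxAQarRJrYqVpJ8rt4AaABAg","responsibility":"company","reasoning":"mixed","policy":"regulate","emotion":"outrage"}
]"""

def index_batch(raw: str) -> dict[str, dict]:
    """Parse a raw batch response and index coded records by comment ID."""
    try:
        records = json.loads(raw)
    except json.JSONDecodeError as e:
        # Truncated or malformed model output: surface it rather than guess.
        raise ValueError(f"model did not return valid JSON: {e}") from e
    return {r["id"]: r for r in records}

by_id = index_batch(raw_response)
print(by_id["ytc_UgyxAQarRJrYqVpJ8rt4AaABAg"]["emotion"])  # -> outrage
```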