Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- it's a story of pure unadulterated raw geopolitical power... that's what the ai … (ytc_UgyQALE9p…)
- I admire Neil but this time I have to disagree with him. All scientific advancem… (ytc_UgxdybQwr…)
- I like art to have nice looking things, i dont care about the "meaning" of it, i… (ytc_UgzHBXbZQ…)
- Well real humans backstab, lie or just want something from you at the end also n… (ytc_Ugwqoni7F…)
- Man I just know once ai gets advanced enough man the chatgpt pillows gonna go c… (ytc_UgwRMhcd-…)
- I think if we do free up a lot of labour by automating stuff we could do a lot o… (ytc_UgzCNZC0D…)
- LLMs cannot yet consistently produce high quality code but not for the reasons y… (ytc_UgxFPcx0A…)
- While we will likely lose many jobs to AI, there are still tons of jobs out ther… (ytc_UgyyATDaG…)
Comment
@francomtz7115 It updates all the time. The point is that while human drivers make the same mistakes over and over, that's not the case for software. If humans cause X accidents per year, we can expect about the same number of accidents of the same type next year, while for software each accident can lead to an improvement and an update for the whole fleet, preventing or reducing the frequency of that type of accident in the future. Software will eventually outperform humans, as it did in chess, if it's within the capability of hardware and algorithms, and even if we are not there yet, both are improving very fast.
youtube · AI Jobs · 2025-05-30T21:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | none |
| Emotion | approval |
| Coded at | 2026-04-27T06:24:59.937377 |
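The four coded dimensions in the table can be sanity-checked against the label sets that actually appear in the raw responses on this page. A minimal validation sketch in plain Python; note the allowed-value sets below are inferred from values observed here, not taken from a published codebook:

```python
# Allowed labels per dimension, inferred from values observed on this page
# (an assumption, not an official codebook).
SCHEMA = {
    "responsibility": {"ai_itself", "company", "government", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "regulate", "ban", "liability"},
    "emotion": {"approval", "outrage", "fear", "indifference", "resignation"},
}

def validate(record: dict) -> list:
    """Return (dimension, value) pairs that fall outside SCHEMA."""
    return [(dim, record.get(dim))
            for dim, allowed in SCHEMA.items()
            if record.get(dim) not in allowed]

# The coded values from the table above.
coded = {"responsibility": "ai_itself", "reasoning": "consequentialist",
         "policy": "none", "emotion": "approval"}
print(validate(coded))  # → [] (the coding above passes)
```

An empty list means every dimension carries a known label; any out-of-schema value (for example, a hallucinated label in a raw model response) is surfaced with its dimension name.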
Raw LLM Response
[
{"id":"ytr_UgxqpAJJXtRrTUIPrOR4AaABAg.AIjJCobbxm4AIl0MbkhPH7","responsibility":"company","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytr_Ugwig6wAkznhgWpocwR4AaABAg.AIjG6le0FjCAIjOtJvGtd8","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytr_UgxskYKpPF1Wt3fisOV4AaABAg.AIjBlLijnKbAIlEBpQiiub","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_Ugx9hAsAFZsRDvoVaQt4AaABAg.AIiuYUdLv14AIjGfgLFFxO","responsibility":"government","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugx9hAsAFZsRDvoVaQt4AaABAg.AIiuYUdLv14AIjXYx9O9-G","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytr_Ugy6IsCJbFPF-wW1Zgd4AaABAg.AIif4Fchqd4AIimvW-_D8G","responsibility":"company","reasoning":"virtue","policy":"none","emotion":"resignation"},
{"id":"ytr_UgxdTXr9Vwdh00xPE1p4AaABAg.AIibBEpZ7FrAIj8cydYudG","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytr_UgxdTXr9Vwdh00xPE1p4AaABAg.AIibBEpZ7FrAIkN7NJWoEF","responsibility":"government","reasoning":"deontological","policy":"ban","emotion":"outrage"},
{"id":"ytr_Ugya4l-x8nU_2NEcghl4AaABAg.AIiZUmsUzgvAIjBYeDRgJi","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},
{"id":"ytr_UgxKUcHOW9I-GFIkHJl4AaABAg.AIiPf41A9xnAIiqI05PLdT","responsibility":"company","reasoning":"consequentialist","policy":"liability","emotion":"fear"}
]
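The raw response above is a JSON array of per-comment codes, which makes the "Look up by comment ID" feature straightforward to implement. A minimal sketch in plain Python, using a shortened sample in the same shape as the payload above (`index_by_id` is a hypothetical helper name):

```python
import json

# A two-record sample in the same shape as the raw LLM response above.
raw_response = '''
[
  {"id": "ytr_UgxqpAJJXtRrTUIPrOR4AaABAg.AIjJCobbxm4AIl0MbkhPH7",
   "responsibility": "company", "reasoning": "consequentialist",
   "policy": "regulate", "emotion": "outrage"},
  {"id": "ytr_UgxskYKpPF1Wt3fisOV4AaABAg.AIjBlLijnKbAIlEBpQiiub",
   "responsibility": "ai_itself", "reasoning": "consequentialist",
   "policy": "none", "emotion": "approval"}
]
'''

def index_by_id(payload: str) -> dict:
    """Parse a raw LLM response and key each coded record by its comment ID."""
    records = json.loads(payload)
    return {rec["id"]: rec for rec in records}

codes = index_by_id(raw_response)
rec = codes["ytr_UgxskYKpPF1Wt3fisOV4AaABAg.AIjBlLijnKbAIlEBpQiiub"]
print(rec["responsibility"], rec["emotion"])  # → ai_itself approval
```

Keying by the full `ytr_…` ID is what lets a coded comment in the UI resolve back to the exact line of model output that produced it.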