Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up a response by comment ID, or browse the random samples below.
Random samples
- "what kind of expert is this, you guys are missing the point!" (Superintelligence …; ytc_Ugw27x6MH…)
- "I go into detail regarding the implications of a lot of this new tech in my news…" (rdc_jhch6vj)
- "I'm probably not be deep enough into AI but to me as an informatics student AI i…" (ytc_UgwYcPaYT…)
- "AI will never be conscious. First of all it isn't even intelligent. That is wh…" (ytc_UgxNSShD2…)
- "They should be kidz 1st of all, learning by exploring the world. Than there are …" (ytc_UgwpPPQU_…)
- "well I called it. when this AI thing first started I said that it would not be l…" (ytc_Ugwjiy5BI…)
- "Great work on poisoning AI and I'm all for it, I think a lot more creatives shou…" (ytc_UgxED50fJ…)
- "No limits to ai as long as compute continues to get cheaper and the economy cont…" (ytc_UgxHqc_sq…)
Comment
Yes and no. The current issue with AI is that if it gets a right answer, it is amazing, literally amazing. More often than not, in my experience, it has a tendency to give the common answer and very often a wrong or partially right answer. It is missing a critical skill to self identify its own mistakes and peer review from another AI just isn't there to identify the missing pieces. It will get there, but I'm afraid there will be a significant mistake at some point that may hurt lots of people. However, I am grateful in this case a child was helped when the answer was not being found.
Source: youtube | Topic: AI Jobs | 2024-04-15T04:2…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | mixed |
| Policy | none |
| Emotion | mixed |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
[
{"id":"ytc_UgzNDCYVVqRq-79xoDp4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgxEqduiR_12Jk_l2s94AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyU52jgSYmBl-AgDHh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"mixed"},
{"id":"ytc_UgxmFDFOS8XVWHfm2a14AaABAg","responsibility":"developer","reasoning":"mixed","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx9pxrSzcP5cETMpvZ4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgyDO_3uJ7Fbj8EyGll4AaABAg","responsibility":"ai_itself","reasoning":"mixed","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwANgwgry-xe4msn9d4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgzLB5hxKnOARDTAWot4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"approval"},
{"id":"ytc_UgwH3LwZ36W2AeyiLdV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"resignation"},
{"id":"ytc_UgzF33jT8f3OmhtD-mN4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"fear"}
]
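A raw response like the one above can be parsed and sanity-checked before the codes are stored. The sketch below is a minimal, hypothetical validator: the allowed value sets are inferred only from the codes visible in this document (the full codebook may define more), and the function name is an illustration, not part of the actual pipeline.

```python
import json

# Values observed in the sample responses above; treat these sets as an
# assumption inferred from the data, not the authoritative codebook.
ALLOWED = {
    "responsibility": {"user", "developer", "ai_itself", "none"},
    "reasoning": {"deontological", "consequentialist", "mixed"},
    "policy": {"none"},
    "emotion": {"approval", "outrage", "resignation", "fear", "mixed"},
}

def parse_coded_batch(raw: str) -> dict:
    """Parse one raw LLM response into {comment_id: codes}, dropping bad rows."""
    coded = {}
    for row in json.loads(raw):
        cid = row.get("id")
        if not cid:
            continue
        codes = {dim: row.get(dim) for dim in ALLOWED}
        # Keep the row only if every dimension holds an allowed value.
        if all(codes[dim] in ALLOWED[dim] for dim in ALLOWED):
            coded[cid] = codes
    return coded

raw = ('[{"id":"ytc_x","responsibility":"user","reasoning":"mixed",'
       '"policy":"none","emotion":"fear"}]')
print(parse_coded_batch(raw)["ytc_x"]["emotion"])  # fear
```

Dropping malformed rows (rather than raising) keeps one bad line in a batch response from discarding the other nine valid codes.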