Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples
- "@fulanibnfulan8764 no, AI only weeds out the useless workers. If anything, it wi…" (ytr_UgyJ4BKLM…)
- "so she's very concerned with what the tech oligarchs in America are doing, but n…" (ytc_Ugwizf0eh…)
- "For anybody wandering this is already a thing now with Google home. You can say …" (ytc_UgynVc7KC…)
- "Once they can figure out how to stabilize Africa, my prediction is that they wil…" (ytc_UgxwZ4jkU…)
- "@awg7068because it is NOT AI, only neural networks. It is only narration to ge…" (ytr_UgzWWkk3L…)
- "Example: Ai can make you think someone ypu Love dearly...is texting you...cursin…" (ytc_UgxyMdXGQ…)
- "Is conciousness produced in brain ? Please explain how ? Machine alone can never…" (ytc_Ugz65U1X5…)
- "A lot of companies have long opted to outsource their CS to bloody useless scrip…" (ytc_UgyUo8Cw5…)
Comment
> The models should be penalized with a -10 to say a wrong answer then and only then we will have good models. Like a model that will "verify" the answer. and if keeps failing (in this case is easiy, try 1 out of 365 -> error because no evidence , try 2 -> error... and so on until 365 all errors so will have to say "I do not have any evidence anything i say will be just a guess, so give me more clues" but the LLM's today are not made to do that they are build on probabilty and they do not care that probability being 0.0000000000000000001 if any exist (and always 1 exist) they will chose that one and fail.
youtube · AI Jobs · 2026-03-20T17:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | outrage |
| Coded at | 2026-04-27T06:24:59.937377 |
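Each coding result is a flat record over four dimensions plus a timestamp. A minimal sketch of how one record could be held and validated in Python, assuming the label sets visible in the raw responses below (the `CodingResult` name and the value sets are illustrative, not the pipeline's actual schema):

```python
from dataclasses import dataclass
from datetime import datetime

# Value sets observed in the raw LLM responses on this page; assumed, not exhaustive.
RESPONSIBILITY = {"none", "company", "ai_itself", "user"}
REASONING = {"consequentialist", "deontological"}
POLICY = {"none", "regulate", "liability"}
EMOTION = {"fear", "outrage", "approval", "indifference", "resignation"}

@dataclass
class CodingResult:
    """One coded comment, mirroring the table above (hypothetical container)."""
    comment_id: str
    responsibility: str
    reasoning: str
    policy: str
    emotion: str
    coded_at: datetime

    def __post_init__(self) -> None:
        # Reject any label outside the observed value sets.
        for field, allowed in (("responsibility", RESPONSIBILITY),
                               ("reasoning", REASONING),
                               ("policy", POLICY),
                               ("emotion", EMOTION)):
            value = getattr(self, field)
            if value not in allowed:
                raise ValueError(f"unexpected {field} label: {value!r}")
```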
Raw LLM Response
[{"id":"ytc_UgwaolCSCrbAmzmXPy14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"},
{"id":"ytc_UgzE5vuj-DLOECBsDzV4AaABAg","responsibility":"company","reasoning":"deontological","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz_OZ-y_IxebN0dWOl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_Ugw3cdAjIfc7VJckYAJ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"fear"},
{"id":"ytc_UgwX7qjzo1jevbrvHc14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugz1DARepnEMK3u2lPB4AaABAg","responsibility":"user","reasoning":"deontological","policy":"none","emotion":"resignation"},
{"id":"ytc_UgytrV3NOnPLyOb7RQ94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_UgzgFuK7fSuKvYnKJAh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"approval"},
{"id":"ytc_UgxRrFppSFBbTWXiKO94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"outrage"},
{"id":"ytc_UgxlVaW17VPUUpRsX4V4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"fear"}]