Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- "AI is new in this realm of things. How can they rely on it to make a house arres…" (ytc_UgzdkHi9J…)
- "Been saying this for the longest! Anyone who has ever worked in corporate Ameri…" (ytc_UgwCuZ6JK…)
- "If the primary purpose was to wage war, then AI too smart and we too stupid, we'…" (ytc_Ugy8d6s-o…)
- "elon musk: lets halt ai development for six months! also elon musk: presenting t…" (ytc_Ugy05o6c4…)
- "From the way she explained the beginning part, it was almost as if she’s trying …" (ytc_UgywWyUuf…)
- "I feel like the issue is that people want the AI to act more like a human.…" (ytc_Ugzo949ok…)
- "Even people might be able to plagarize (idk how to spell this fucming word, so c…" (ytc_UgytpiUy6…)
- "No, it has made it so no skill is required because a company has no need to hire…" (ytr_UgyALGue7…)
Comment
This is not how LLMs work. They construct responses by predicting the most probable next word in a sequence, one token at a time. This process is based on statistical patterns learned from vast amounts of training data, not human-like understanding. The specific word is selected using decoding strategies that interpret the model's probability distribution over its vocabulary. It is not "thinking", just predicting the next word in a sequence. During training, the model analyzes relationships within this data—such as how words typically appear together in context—and uses that understanding to predict the next most likely word when generating a response, one word at a time
youtube · AI Harm Incident · 2025-09-11T14:2…
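The decoding process this comment describes — turning the model's probability distribution over its vocabulary into a single next token — can be sketched in a few lines. This is a minimal illustration of temperature sampling, not any specific model's implementation; the `vocab` and `logits` values below are toy data.

```python
import math
import random

def softmax(logits):
    """Convert raw scores (logits) into a probability distribution."""
    m = max(logits)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def sample_next_token(vocab, logits, temperature=1.0, rng=None):
    """Pick the next token by sampling the temperature-scaled distribution.

    Lower temperature sharpens the distribution; as it approaches 0,
    sampling approaches greedy (argmax) decoding.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    probs = softmax(scaled)
    return rng.choices(vocab, weights=probs, k=1)[0]

# Toy example: a "model" that strongly prefers the first token.
vocab = ["predicting", "thinking", "guessing"]
logits = [4.0, 1.0, 0.5]
print(sample_next_token(vocab, logits, temperature=0.7, rng=random.Random(0)))
```

Generation repeats this step in a loop, appending each sampled token to the context before predicting the next one — which is the "one token at a time" process the comment refers to.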
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | none |
| Emotion | indifference |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
[
{"id":"ytc_UgzLXq27qfxEo4ZbGdF4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgxXS8pXVsZrBByBlgR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_UgwQQlUVxmOjLUpknrN4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgxApgcM2b96LJ-BBhB4AaABAg","responsibility":"developer","reasoning":"virtue","policy":"industry_self","emotion":"approval"},
{"id":"ytc_UgzQJS8_2OmLGOZijQl4AaABAg","responsibility":"user","reasoning":"deontological","policy":"unclear","emotion":"outrage"},
{"id":"ytc_UgzagsyZaKUS66iwQXB4AaABAg","responsibility":"distributed","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgxnryVMeqVS5W81F2d4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"indifference"},
{"id":"ytc_Ugzzk1xlZy9b5d78vuF4AaABAg","responsibility":"government","reasoning":"consequentialist","policy":"regulate","emotion":"outrage"},
{"id":"ytc_Ugz7danUOFL9GATByKV4AaABAg","responsibility":"developer","reasoning":"mixed","policy":"regulate","emotion":"mixed"},
{"id":"ytc_Ugy0ZefBS7Z9cq9lplR4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"ban","emotion":"outrage"}
]
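A raw response like the one above can be parsed and validated before being written into the coding table. The sketch below rejects any row whose values fall outside the codebook; note the allowed value sets are inferred only from the values visible in this dump, so the actual codebook may differ.

```python
import json

# Allowed values per dimension, inferred from this dump (assumption,
# not a published codebook).
ALLOWED = {
    "responsibility": {"developer", "user", "government", "distributed", "ai_itself", "none"},
    "reasoning": {"consequentialist", "deontological", "virtue", "mixed", "unclear"},
    "policy": {"regulate", "liability", "ban", "industry_self", "none", "unclear"},
    "emotion": {"fear", "outrage", "approval", "indifference", "mixed"},
}

def parse_coded_batch(raw: str):
    """Parse a raw LLM response and split rows into valid and rejected."""
    rows = json.loads(raw)
    valid, rejected = [], []
    for row in rows:
        ok = all(row.get(dim) in vals for dim, vals in ALLOWED.items())
        (valid if ok else rejected).append(row)
    return valid, rejected

# Hypothetical two-row batch: one valid, one with an out-of-codebook value.
raw = (
    '[{"id":"ytc_x","responsibility":"developer","reasoning":"virtue",'
    '"policy":"ban","emotion":"fear"},'
    '{"id":"ytc_y","responsibility":"alien","reasoning":"unclear",'
    '"policy":"none","emotion":"mixed"}]'
)
valid, rejected = parse_coded_batch(raw)
print(len(valid), len(rejected))  # 1 1
```

Validating before ingestion catches malformed or hallucinated labels early, so a single bad row does not silently corrupt the coded dataset.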