Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples:

- Edit: Why do people not like ai? Because you’re all hopelessly stuck in the pas… — `ytr_Ugzf6X4kl…`
- Humans might need to merge with AI because we’ll be too dumb to function the way… — `ytc_Ugw3NxuyR…`
- Oh and as mentally disabled person... Nah. AI isn't art. It's not helping anyone… — `ytc_Ugzzyz-yY…`
- The algorithm just favors something that can post real fast and lots of "content… — `ytc_UgwT1iuHU…`
- If only we were warned about AI over and over for the last 35 years.… — `ytc_UgwDVqDbu…`
- Nov. 22, 2019 For me there is a Cold War feel to events. China is gaining contro… — `ytc_Ugxm642An…`
- Why not, Sand a robot to gaza.. maybe israel can give you much more money to ga… — `ytc_UgwAdi28J…`
- You are just robot with an air strike of missile will be destroyed. Remember you… — `ytc_UgzMNuT_k…`
Comment
Larger models are mostly empty void solution search space. Therefore mostly a waste of compute. Focused models are more powerful and less expensive and results based. Taking a LLM or FM LLM by itself without a lot of harnessing and or fine tuning is going to give within distribution generic slop. Remember we are building large language models on tiny datasets. The internet is so small it is less than 1% of data and they are not using all of it, they are using a small percentage of that. I say 1% because data is constantly growing, it is not 5% fixed. We produce more each year than decades when it started to grow.
Platform: youtube
Topic: AI Responsibility
Posted: 2026-02-13T21:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
```json
[
  {"id": "ytc_UgyS-NM7-dras6wOkcx4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_Ugz5mEQG8Xej7xAbRqN4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "mixed"},
  {"id": "ytc_UgxHWErflSBabNRMvBF4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_UgwwKRpKDB0VAFM6lPl4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "resignation"},
  {"id": "ytc_UgwWGJsQCFl2ma5bTF94AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "indifference"},
  {"id": "ytc_UgzatYnP7Lfiozrpadh4AaABAg", "responsibility": "user", "reasoning": "virtue", "policy": "none", "emotion": "outrage"},
  {"id": "ytc_Ugyte8-PQGpfeBV_vyV4AaABAg", "responsibility": "company", "reasoning": "consequentialist", "policy": "industry_self", "emotion": "approval"},
  {"id": "ytc_Ugx382hSG92YWzzZIQt4AaABAg", "responsibility": "none", "reasoning": "unclear", "policy": "unclear", "emotion": "approval"},
  {"id": "ytc_UgxZBtRnZeZ-1kNUMNp4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "regulate", "emotion": "fear"},
  {"id": "ytc_UgzE6rU3eg-EFhLpXcN4AaABAg", "responsibility": "company", "reasoning": "deontological", "policy": "liability", "emotion": "outrage"}
]
```
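A raw batch response like the one above has to be parsed and validated before its codes can populate the per-comment table. The sketch below is a minimal, hypothetical example (not the tool's actual code): `parse_batch` and its allowed-value sets are assumptions, with the code values inferred only from the responses shown on this page.

```python
import json

# Allowed codes per dimension, inferred from the responses on this page
# (assumed subset -- the real codebook may define additional values).
ALLOWED = {
    "responsibility": {"none", "company", "user"},
    "reasoning": {"unclear", "consequentialist", "deontological", "virtue"},
    "policy": {"unclear", "none", "industry_self", "regulate", "liability"},
    "emotion": {"indifference", "mixed", "outrage", "resignation",
                "approval", "fear"},
}

def parse_batch(raw: str) -> dict[str, dict[str, str]]:
    """Parse a raw batch response and index the codes by comment ID,
    raising ValueError on any value outside the assumed codebook."""
    coded = {}
    for rec in json.loads(raw):
        cid = rec["id"]
        for dim, allowed in ALLOWED.items():
            if rec[dim] not in allowed:
                raise ValueError(f"{cid}: {dim}={rec[dim]!r} not in codebook")
        coded[cid] = {dim: rec[dim] for dim in ALLOWED}
    return coded
```

Indexing by ID makes the lookup for a single comment's coding result a plain dictionary access, e.g. `parse_batch(raw)[comment_id]["emotion"]`.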