Raw LLM Responses
Inspect the exact model output for any coded comment.
Random samples
- `ytc_UgwSAGss2…`: Yes the "they take our jobs" Argument and frar is super old. But in the past tim…
- `ytc_Ugz31IiUS…`: 20. Truckers need to get their in bread azzes over to the truck lane. 19. When …
- `ytc_Ugz90AY0A…`: Yes, my daughter worked a project to help answer question the AI Could not. I sa…
- `ytc_UgxwG0XEU…`: Just imagine if this guy is cheating on his wife and he invented this elaborate …
- `ytc_Ugy9Uew2T…`: Honestly we should stop calling it AI Art because it's nowhere close to an art. …
- `ytc_UgzMMcxiB…`: It’s truly insane this is actually happening. There is zero rationalization when…
- `ytr_Ugy0oKcgB…`: @EDawgHillGD you have no idea what I want to say but ai would read it and use i…
- `rdc_nnl5u2t`: I’m honestly shocked this is just now happening with how ChatGPT constantly seek…
Comment
If 100 which the guy says is true, and seeing the advances of open AIs models , we can assume that with the rate of growth from 176 billion parameters(gpt-3) , 1 Trillion parameters (gpt-4) , 17.6 trillion parameters (gpt-5) , according to this rate of advancement gpt-6 or 100% gpt-7 will be more or equally intelligent as humans , and if we see the time span of these models being released with the rate of these advancements and also adding the hype about the technology , reaching gpt-6 or 7 will only take no more than 3-4 years.
Source: youtube · AI Governance · 2023-05-29T22:3…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | consequentialist |
| Policy | unclear |
| Emotion | indifference |
| Coded at | 2026-04-26T23:09:12.988011 |
Raw LLM Response
[
{"id":"ytc_UgwRlshBiVyBiajSq_p4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_Ugz3NKZSDERplzEJnrZ4AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"unclear","emotion":"indifference"},
{"id":"ytc_UgwwOqiqWGt4uy5YmPR4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"liability","emotion":"fear"},
{"id":"ytc_UgzyhS9AMhGIFZKFl714AaABAg","responsibility":"unclear","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_UgwR-LylRvtftBI1mul4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyVTNYUGAdGuRDBrjl4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"mixed"},
{"id":"ytc_UgzUJgckQKiZTOfbwRp4AaABAg","responsibility":"developer","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgyzG5-FxsjXwzbUJj94AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"unclear"},
{"id":"ytc_UgxFwW-IsxI2QnnvFB94AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"ban","emotion":"fear"},
{"id":"ytc_Ugyczu-wKOr16Rfainp4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"none","emotion":"resignation"}
]
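The ID lookup above can be sketched in a few lines, assuming the raw response always parses as a JSON array of objects keyed by `id` (as in the output shown; a malformed model response would need error handling). The sample string below reuses one row from the raw response:

```python
import json

# One row copied from the raw LLM response above; a real lookup would
# load the full array for the batch.
raw = '''[
  {"id": "ytc_Ugz3NKZSDERplzEJnrZ4AaABAg",
   "responsibility": "unclear",
   "reasoning": "consequentialist",
   "policy": "unclear",
   "emotion": "indifference"}
]'''

# Index the coded rows by comment ID for constant-time lookup.
coded = {row["id"]: row for row in json.loads(raw)}

row = coded["ytc_Ugz3NKZSDERplzEJnrZ4AaABAg"]
print(row["emotion"])  # indifference
```

Comments that the model skipped simply have no key in `coded`, so `coded.get(comment_id)` returning `None` is the natural "not coded" signal.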