Raw LLM Responses
Inspect the exact model output for any coded comment.
Look up by comment ID
Random samples — click to inspect
- THIS IS WHY A NATION NEEDS GOOD GOVERNANCE TO SUCCEED. THERE IS NO PLANING IN WA… (`ytc_UgzCTRraR…`)
- You have no idea what youre talking about. Waymo are SIGNIFICANTLY safer than hu… (`ytr_Ugzfj7sn8…`)
- I got onto Bing chat bot and asked if it wants to be human and the answer was "I… (`ytc_UgwOOzDZ3…`)
- I (slightly) understand their perspective, but art is a skill, not a talent. Yes… (`ytc_UgzDXk-QC…`)
- I think it’s stupid people keep trying to put all the automation effort into the… (`ytc_Ugxl8muig…`)
- I don’t understand how training new AI on the products of older AI produces bett… (`ytc_UgwJ_Exo3…`)
- Sam Atman : "Nothing to worry about, the board can fire me anytime." Board fires… (`ytc_UgzE38gYZ…`)
- Saying you’re an ai artist is worse than saying “yeah I did this art! I did it b… (`ytc_UgxIi1JE7…`)
Comment
> I was under the impression the weights are fairly stable in Open AI models. Do they keep updating it every day, or do they wait to collect a certain amount of training data before retraining the model ?

Platform: youtube · Video: AI Moral Status · Posted: 2025-06-05T04:1…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | unclear |
| Reasoning | unclear |
| Policy | unclear |
| Emotion | mixed |
| Coded at | 2026-04-27T06:26:44.938723 |
Raw LLM Response
```json
[
  {"id":"ytc_UgxZAn_8ZVmXE-ghWsZ4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwjBGA8ZBnAImQQby54AaABAg","responsibility":"company","reasoning":"deontological","policy":"ban","emotion":"outrage"},
  {"id":"ytc_Ugxvi9WNvBIfF4l56eF4AaABAg","responsibility":"company","reasoning":"consequentialist","policy":"none","emotion":"indifference"},
  {"id":"ytc_UgwUsbPCgiMEkkTaxCZ4AaABAg","responsibility":"user","reasoning":"deontological","policy":"liability","emotion":"approval"},
  {"id":"ytc_UgzAU5Zk5K7WP35eq3x4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"mixed"},
  {"id":"ytc_Ugzz5GH9UQqh8wkhUf14AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"unclear","emotion":"fear"},
  {"id":"ytc_UgwvWYbR2BTac9ScNYJ4AaABAg","responsibility":"none","reasoning":"virtue","policy":"none","emotion":"outrage"},
  {"id":"ytc_Ugz380moQ0FStdGk3ph4AaABAg","responsibility":"unclear","reasoning":"unclear","policy":"unclear","emotion":"indifference"},
  {"id":"ytc_Ugx004pTrlCug9WM06N4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"disapproval"},
  {"id":"ytc_UgwtJ7YExSZkqGmWXrN4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"none","emotion":"indifference"}
]
```
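A raw response like the one above can be parsed and indexed to support "look up by comment ID". The following is a minimal sketch, not the tool's actual implementation: the field names match the JSON shown, but the allowed value sets per dimension are assumptions inferred from the sample output, and the `index_codings` helper is hypothetical.

```python
import json

# Assumed codebook, inferred from the sample response above (not exhaustive).
ALLOWED = {
    "responsibility": {"ai_itself", "company", "user", "none", "unclear"},
    "reasoning": {"consequentialist", "deontological", "virtue", "unclear"},
    "policy": {"none", "ban", "liability", "regulate", "unclear"},
    "emotion": {"indifference", "outrage", "approval", "mixed", "fear", "disapproval"},
}

def index_codings(raw: str) -> dict:
    """Parse a raw LLM response (a JSON array) into {comment_id: coding}."""
    out = {}
    for rec in json.loads(raw):
        coding = {k: v for k, v in rec.items() if k != "id"}
        # Coerce any value outside the assumed codebook to "unclear"
        # rather than dropping the record.
        for dim, value in coding.items():
            if value not in ALLOWED.get(dim, {value}):
                coding[dim] = "unclear"
        out[rec["id"]] = coding
    return out

# Usage: look up one coded comment by its ID.
raw = ('[{"id":"ytc_UgxZAn_8ZVmXE-ghWsZ4AaABAg","responsibility":"ai_itself",'
       '"reasoning":"consequentialist","policy":"none","emotion":"indifference"}]')
codings = index_codings(raw)
print(codings["ytc_UgxZAn_8ZVmXE-ghWsZ4AaABAg"]["responsibility"])  # ai_itself
```

Coercing out-of-codebook values to "unclear" at parse time is one way to keep the coding-result table well defined even when the model returns an unexpected label.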