Raw LLM Responses
Inspect the exact model output for any coded comment.
Comment

> it's not like big pharma, training and fine-tuning LLM doesn't require a big company or a lot of money. very soon it will be possible that millions of people can do it by themselves. then what to do? regulate individuals? actually, I, one person, can fine-tune models with less than 1000 usd. i'm simply saying that regulation is not going to be very effective. think about it, why google and microsoft with so much capital, and yet they all fall behind?

| Field | Value |
|---|---|
| Platform | youtube |
| Topic | AI Governance |
| Posted | 2023-06-19T15:2… |
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | none |
| Reasoning | consequentialist |
| Policy | industry_self |
| Emotion | indifference |
| Coded at | 2026-04-27T06:24:53.388235 |
Raw LLM Response
```json
[
{"id":"ytc_UgxidDzgNd35wneAVKZ4AaABAg","responsibility":"government","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_Ugx5cuC2GzKKAI8uGg14AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"industry_self","emotion":"indifference"},
{"id":"ytc_UgwExSE3E3vBiIPp3jF4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_UgyX59aNPsd-L-pqTD94AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"},
{"id":"ytc_UgwosS99uFjb2tVMwfh4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"approval"},
{"id":"ytc_Ugw8wbAARq2PeEtzOdR4AaABAg","responsibility":"none","reasoning":"unclear","policy":"regulate","emotion":"fear"},
{"id":"ytc_UgwAb7bWp78cG4f5UKt4AaABAg","responsibility":"none","reasoning":"deontological","policy":"none","emotion":"outrage"},
{"id":"ytc_UgzsDBuAlG7Da49VXMV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},
{"id":"ytc_Ugw9YAgV-dEJ5WFCmA14AaABAg","responsibility":"ai_itself","reasoning":"unclear","policy":"none","emotion":"resignation"},
{"id":"ytc_UgyzYPxhWUb_ZyFvzQZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"mixed"}
]
```
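A response like the one above can be checked before its codes are stored. The sketch below is a minimal validator, assuming the raw response is a JSON array of records with an `id` plus the four coding dimensions shown in the table. The allowed code values are only those observed in this sample batch — the project's full codebook may define more, so `OBSERVED_CODES` is an assumption, not the canonical schema.

```python
import json

# Allowed values per dimension, as observed in this sample batch.
# Assumption: the real codebook may include additional codes.
OBSERVED_CODES = {
    "responsibility": {"none", "government", "ai_itself"},
    "reasoning": {"consequentialist", "deontological", "unclear"},
    "policy": {"none", "industry_self", "regulate"},
    "emotion": {"outrage", "indifference", "approval", "mixed",
                "fear", "resignation"},
}

def validate_batch(raw: str) -> list[dict]:
    """Parse a raw LLM coding response and return a list of problems.

    An empty list means every record parsed cleanly: each one has an
    `id`, and every dimension carries a value seen in the codebook.
    """
    records = json.loads(raw)
    problems = []
    for rec in records:
        if "id" not in rec:
            problems.append({"record": rec, "error": "missing id"})
            continue
        for dim, allowed in OBSERVED_CODES.items():
            value = rec.get(dim)
            if value not in allowed:
                problems.append(
                    {"id": rec["id"], "dimension": dim, "value": value}
                )
    return problems
```

Running this over the batch above would return an empty list; a record with a misspelled code or a missing dimension would be flagged by `id` so the comment can be re-coded rather than silently stored.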