Raw LLM Responses
Inspect the exact model output for any coded comment.
You can look up a response by comment ID, or start from one of the random samples below:
- @Someinsanegenuis looking at this from the perspective of a software engineer. … (ytr_UgwfX3kOi…)
- >Carey manipulated, or deepfaked, photos of more than a dozen women, includin… (rdc_k20ei2r)
- Computers and printerd destroyed a typewriter's job. Keep crying about AI. Stop … (ytc_Ugy4tzzS9…)
- Honestly how many of you wants to talk or chat with an AI bot instead of a real … (ytc_Ugx-aCfZ6…)
- The AI will take over the world and it's due to greed from a selective group of … (ytc_UgykK8ccW…)
- The technology behind AI actually started in the medical field with medical imag… (ytc_Ugztb2sux…)
- AI is starting to look like a really glorified version of intellisense. At the e… (rdc_mjtbdit)
- if an AI does something "immoral" while knowing that it's wrong then it's a prob… (ytc_UgzzgQh1g…)
Comment
interesting how the inevablility of doom scenario seems to be pushed to the margins white highly inprobable utopia is talked about disproportionally. Roman seems like a super smart guy but I don't think you need to be one to not only see the danger but arrive at a conclusion that end of human civilization within the next 100yrs (probably much sooner) is almost certain. It is much easier to list optimistic scenarios since there are so few (also a mix of the following could help): 1. cataclismic regression of tech as consequence of a war or natural disaster on global scale 2. demographics: not enough young people to keep innovation and/or necessary market forces going to reach AGI 3. bulterian jihad: global raise of social movements preventing further progress, perhaps in style of spanish inquisition 4. autonomous sentience and free will is much harder then expected or possibly not viable in machines. None of these is probable to take place quick enough imho. Soooo, enjoy life for now. There is probably nothing we can do.
Source: youtube · Posted: 2024-07-09T06:0…
Coding Result
| Dimension | Value |
|---|---|
| Responsibility | ai_itself |
| Reasoning | consequentialist |
| Policy | regulate |
| Emotion | fear |
| Coded at | 2026-04-27T06:26:44.938723 |
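Every coded comment carries the same four dimensions shown in this table. A minimal validation sketch in Python, assuming the category sets below (collected only from values visible in this batch; the actual codebook may define additional categories):

    # Category values observed in this batch; the full codebook may be larger.
    ALLOWED = {
        "responsibility": {"ai_itself", "user", "none"},
        "reasoning": {"consequentialist", "deontological", "unclear"},
        "policy": {"regulate", "liability", "none", "unclear"},
        "emotion": {"fear", "approval", "indifference"},
    }

    def invalid_dimensions(row):
        """Return the dimensions of a coded row whose value falls outside the observed sets."""
        return [dim for dim, allowed in ALLOWED.items() if row.get(dim) not in allowed]

    # The coding result above passes against the observed sets.
    assert invalid_dimensions({"responsibility": "ai_itself", "reasoning": "consequentialist",
                               "policy": "regulate", "emotion": "fear"}) == []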
Raw LLM Response
[{"id":"ytc_UgxNAuHqR4kFP3s8LLZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgzwyTPpQLsMN3hEfHl4AaABAg","responsibility":"ai_itself","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_UgxmxCieybmA-WGAg8l4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"},{"id":"ytc_UgwrEtG_DGRYRRDGxxt4AaABAg","responsibility":"user","reasoning":"consequentialist","policy":"liability","emotion":"fear"},{"id":"ytc_UgyXMbSGUNiwtr594jh4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"approval"},{"id":"ytc_UgzFrMAY_PmD5nzzbEp4AaABAg","responsibility":"none","reasoning":"unclear","policy":"unclear","emotion":"indifference"},{"id":"ytc_UgwW_dIyCXOQ6CVOXaZ4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_UgxdJI60BOZYn8TEn2h4AaABAg","responsibility":"ai_itself","reasoning":"deontological","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugwa1hfCbh78PildQrV4AaABAg","responsibility":"none","reasoning":"consequentialist","policy":"regulate","emotion":"fear"},{"id":"ytc_Ugy3k6nSnFnDY5zYURZ4AaABAg","responsibility":"none","reasoning":"unclear","policy":"none","emotion":"approval"}]