Data Quality and Response Filtering
AI.Society users can help correct and verify the answers and interactions generated by AI NPCs. This matters because AI NPCs may produce inaccurate, inappropriate, or unsafe text that harms the metaverse environment or its audience. Verifying an AI NPC's answer can take significant effort, depending on the answer's complexity and length and on the availability and reliability of sources to check it against. Therefore, AIS tokens can be awarded to users who filter AI NPC responses, using a mechanism similar to proof-of-work or proof-of-stake in blockchains.
Users who want to verify an AI NPC's answer must stake a certain amount of AIS tokens and provide evidence or feedback supporting their verification; the stake signals commitment and helps maintain overall service standards. Behavioral scores are based on factors such as the accuracy, relevance, and timeliness of the verification, as well as the user's feedback and reputation in the metaverse.
If their verification is accepted by the majority of other users or by designated authorities, they receive a positive behavioral score and a reward in AIS tokens proportional to their stake and effort. Users with higher behavioral scores could receive more rewards or privileges in the metaverse, such as access to premium services or features, or influence over the governance process. If their verification is rejected, they lose behavioral score, and users with lower behavioral scores could receive fewer rewards or privileges, or face restrictions or sanctions in the metaverse. This mechanism incentivizes users to participate in the verification process and helps ensure the quality and safety of AI NPC text.
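The stake-vote-settle flow described above can be sketched as follows. This is a minimal illustration, not the AI.Society implementation: the names (`Verifier`, `submit`, `settle`) and the parameters (`MIN_STAKE`, `REWARD_RATE`, `SCORE_DELTA`) are hypothetical, and real on-chain logic would live in a smart contract rather than in-memory objects.

```python
from dataclasses import dataclass

# Illustrative parameters -- actual values would be set by AI.Society governance.
MIN_STAKE = 10      # minimum AIS tokens required to submit a verification
REWARD_RATE = 0.2   # reward as a fraction of the stake when accepted
SCORE_DELTA = 5     # behavioral-score change on acceptance or rejection

@dataclass
class Verifier:
    balance: float            # AIS token balance
    behavioral_score: int = 0

@dataclass
class Verification:
    verifier: Verifier
    stake: float
    votes_for: int = 0
    votes_against: int = 0

def submit(verifier: Verifier, stake: float) -> Verification:
    """Lock the verifier's stake while the community votes."""
    if stake < MIN_STAKE or stake > verifier.balance:
        raise ValueError("insufficient stake")
    verifier.balance -= stake
    return Verification(verifier, stake)

def settle(v: Verification) -> bool:
    """Resolve a verification by simple majority of votes."""
    accepted = v.votes_for > v.votes_against
    if accepted:
        # Accepted: stake is returned plus a proportional reward,
        # and the behavioral score rises.
        v.verifier.balance += v.stake * (1 + REWARD_RATE)
        v.verifier.behavioral_score += SCORE_DELTA
    else:
        # Rejected: the stake is forfeited and the score drops.
        v.verifier.behavioral_score -= SCORE_DELTA
    return accepted
```

For example, a verifier staking 20 AIS from a balance of 100 whose verification passes 3 votes to 1 ends with 104 AIS (stake returned plus a 4-token reward) and a behavioral score of 5. Tying the reward to the stake, rather than paying a flat fee, is what makes a false verification costly.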