Marking functions: SPAM, SCAM, BOT, FAKE and the role of the ccfound platform in the Internet space.

What is the CCFOUND platform, as I understand it? It is like a multicultural encyclopedia, a kind of educational data bank covering every field. Relating this to the traditional understanding of knowledge, it is an edifice of knowledge with several floors (zones):
* Zone 1 is the daily press section, i.e. the "News" module on the platform.
* Zone 2 is a part of the "Knowledge" module: a public library containing generally available, free content. Each question asked is the cover of an empty book. Each answer is a separate chapter of that book, written by a different author. Each comment (reply) is a shorter or longer review of a chapter. Tags can be understood as the departments of such a library of knowledge, and channels, if they appear in the future, as library shelves where all publications of one person are gathered in one place.
* Zone 3 is a part of the "Knowledge" module (or, in the future, a separate module) that works like a multimedia bookstore, holding paid questions and other paid educational materials, arranged much as in the library zone.

With such an innovative understanding of the portal's role in the Internet space, as a multicultural and multinational encyclopedia of knowledge, is there a place for spam, scams, bots and fake news? Does giving platform users easy access to this kind of marking tool not risk perpetuating habits, not necessarily desirable ones, carried over from other social media?

Getting down to business: what is SPAM? I will repeat the explanation posted on the Netia SA website. It was originally the name of an American luncheon meat sold as "Shoulder Pork and Ham". Far more relevant here, however, is the backronym: Stupid Pointless Annoying Messages.
The characteristic features of SPAM include:
* mass distribution,
* lack of personalization,
* delivery by e-mail, SMS, instant messaging and social networking accounts,
* frequently illegal activity (especially when the SPAM aims to infect a system),
* attempts to mislead the recipient,
* the use of social engineering methods.

Knowing what SPAM is, and following the above understanding of the platform's role in the Internet space, I asked myself: how can you spam an encyclopedia at all? Will any question asked on the platform meet the spam criteria? After all, no question is directed at anyone personally, and none is sent en masse to specific profiles using social engineering manipulation. Remember: there are no stupid, meaningless questions, but there are questions that inspire deeper reflection and force you to rethink your perception of the world and your surroundings, and these may seem annoying to someone precisely because the issues they raise cannot be ignored. There should be a system for identifying forbidden content on the platform, but letting individual users decide arbitrarily, without reflection, what belongs in the encyclopedia and what does not is like letting library visitors tear out the pages they find unnecessary or annoying. Isn't that one of the more aggressive forms of censorship and stigmatization that our platform set out to fight? Currently we mark someone on the platform as a SPAMMER, or their question as SPAM, without providing any substantive justification. It is like walking into a library or bookstore and pouring red paint over selected books, or over a given author's shelf, guided by a subjective whim. Would you want to buy or borrow books defaced like that? An example: if someone asks "Is it true that 2 + 2 = 4?", one person considers it stupid and nonsensical, i.e. SPAM, and a hundred other people vote that it is SPAM, does such a "vox populi" change the rules of mathematics?
There is an even greater danger here (more on that later): someone may mark this question as a SCAM, i.e. a fraud! What happens if many people support such a label? Will we have a revolution in science and a need to declare outstanding scientists frauds? If we cannot make a conceptual revolution on the platform at the moment, then at least some form of the presumption of innocence should be introduced: when someone clicks to mark content as SPAM, a window must open in which they substantively justify that choice; this information is then sent anonymously to the profile of the marked person, who can respond to the allegations, without personal references; and only in the next step do the other FOUNDERERS vote, having a fuller picture of the problem and seeing the arguments of both sides. Otherwise we will end up in a situation where the classification of a given piece of content, or of a given person, is determined primarily by subjective factors.

Now it is time to move on to an even bigger caliber. What is a SCAM? In short, it is a fraud, a machination that consists in winning someone's trust and then using that trust to extort property or other sensitive data. SCAM is a multidimensional concept, considered in social, sociological, ethical, moral and purely technological terms. Yet we forget its most important, legal aspect: a SCAM, like any fraud, is a crime punishable by imprisonment. Inextricably linked with this issue is another legal concept, slander, which occurs, among other situations, when one person unjustifiably accuses another of fraud. It has become common to think that we are anonymous on the Internet and can therefore comment more boldly on other people's actions and attitudes, without being aware of the consequences, especially the legal ones, of our own actions.
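The presumption-of-innocence flagging flow proposed above (a mandatory justification window, an anonymous notice to the flagged author, the author's response, and only then a community vote) can be sketched as follows. This is a minimal illustration, not ccFOUND's actual implementation; all class and method names are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Flag:
    content_id: str
    label: str                 # e.g. "SPAM", "SCAM", "FAKE"
    justification: str         # mandatory substantive reasoning
    author_response: str = ""  # the flagged author's reply to the allegation

class Moderation:
    def __init__(self):
        self.flags = []

    def submit_flag(self, content_id, label, justification):
        # Reject "whim" flags: the justification window must be filled in.
        if not justification.strip():
            raise ValueError("A flag requires a substantive justification")
        flag = Flag(content_id, label, justification)
        self.flags.append(flag)
        return flag

    def respond(self, flag, response):
        # The flagged author sees the anonymous allegation and replies to it.
        flag.author_response = response

    def open_vote(self, flag):
        # Voting starts only after both sides have been heard.
        if not flag.author_response:
            raise RuntimeError("Vote opens only after the author has responded")
        return True
```

In this order the voters see the arguments of both sides before deciding, which is exactly the "fuller picture of the problem" the text argues for.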
In my opinion, the platform should approach this issue more thoughtfully. Providing a feature that lets one person mark another as a fraudster has far-reaching consequences for both of those people and for the platform itself. Permanently marking someone on the platform as a SCAMMER simply means declaring that person a criminal in a public medium, in violation of the presumption of innocence. Unless I am mistaken, ruling on someone's guilt or innocence falls within the jurisdiction of the common courts. Nor have I heard of a referendum being announced in the course of court proceedings so that guilt could be decided by the result of a vote. Let us remember that not every area of life is governed by the principles of direct democracy.

Now, for a change, the smaller matters: BOTs and FAKE news. In the case of BOTs, i.e. applications that perform a specific procedure in a space intended for humans while simulating the behavior of a living user, my concern is technological: will an ordinary user of the portal, without administrator rights, have sufficient tools to correctly assess that a given account is a BOT? So the order, as with the other markings, should be this: the signal first goes to the platform administrators, it is substantively verified, and only once it is confirmed is the content or user marked in a way visible to everyone. In the case of FAKE news, a justification should accompany the marking for two reasons. First, it educates all users. Second, it counters the often mistaken assumption that everyone spreading false news does so consciously and deliberately. Sometimes people believe what they write, and in that case the role of other users is to use substantive arguments to make such a person rethink their worldview, or look at the given situation or issue from a different perspective.
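The admin-first order argued above for BOT reports (users can only signal a suspicion; the public mark appears only after an administrator's substantive verification) could look like this in outline. Again, the names are assumptions made for illustration, not a real API.

```python
class BotReports:
    def __init__(self):
        self.pending = {}       # account -> list of report reasons awaiting review
        self.confirmed = set()  # accounts publicly marked as BOT after verification

    def report(self, account, reason):
        # Ordinary users only signal a suspicion; no label is applied yet.
        self.pending.setdefault(account, []).append(reason)

    def admin_review(self, account, is_bot):
        # Only an administrator's confirmation makes the mark visible to everyone.
        reasons = self.pending.pop(account, [])
        if is_bot:
            self.confirmed.add(account)
        return reasons

    def is_marked(self, account):
        return account in self.confirmed
```

The same two-step shape (signal, then verified mark) would also fit FAKE news, with the reviewer's justification stored alongside the mark for the educational purpose described above.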
When the arguments for considering a given piece of content FAKE are clearly stated, every recipient who comes across that content on the portal can benefit. Otherwise, recipients are left wondering why someone flagged the question or answer that way, especially where the content is ambiguous. Finally, there is the question of assessing the usefulness of content for a specific person (not necessarily the person who asked the question). Perhaps it would be worth letting users rate the usefulness of a given question or answer not by marking it as, e.g., SPAM or FAKE, but through a gradation of likes. In such a variant, after clicking the paw, a drop-down menu would appear with options such as: unhelpful, indifferent, useful, very useful. (This is not to be confused with the notions of valuable, less valuable or worthless.) Then, next to the question or answer, in addition to the number of likes, a number of "stars" or other graphic markers could appear. The number of likes would show, for example, the popularity of a given topic or author, while the marker ranking would show how useful the content is to the community. An example from the platform: someone asked a question about an investment profile, and among the answers was a discussion of an investment portfolio. From the point of view of the questioner such an answer is useless, and in the extreme case could be considered annoying SPAM. From the perspective of other users who came across the question, in particular those interested in the investment topic, the answer about the investment portfolio may be considered useful, as a "bonus" to the answer on the basic topic, and liked as such. I encourage the other FOUNDERERS to express their views on my thoughts and conclusions.
I would also like to hear other points of view on the issues I have raised.
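The "gradation of likes" idea can be sketched as a small data structure: each like carries a usefulness level, so a piece of content shows both a raw like count (popularity) and an average "star" score (usefulness). The level names mirror the drop-down options suggested above; everything else, including the 0–3 scale, is an assumption for illustration.

```python
from collections import Counter

# Usefulness levels from the proposed drop-down, mapped to a 0-3 star scale.
LEVELS = {"unhelpful": 0, "indifferent": 1, "useful": 2, "very useful": 3}

class RatedContent:
    def __init__(self):
        self.ratings = Counter()  # level name -> number of likes at that level

    def like(self, level):
        if level not in LEVELS:
            raise ValueError(f"unknown usefulness level: {level}")
        self.ratings[level] += 1

    @property
    def likes(self):
        # Popularity: the raw number of reactions, regardless of level.
        return sum(self.ratings.values())

    @property
    def stars(self):
        # Usefulness: the average level, shown as 0-3 "stars".
        if not self.ratings:
            return 0.0
        total = sum(LEVELS[lv] * n for lv, n in self.ratings.items())
        return total / self.likes
```

In the investment-portfolio example, the questioner's "unhelpful" click and other readers' "useful" clicks would all raise the like count, while the star score would reflect how the community as a whole weighed the answer.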