What Edited Retweets Reveal about Online Political Discourse

How widespread is the practice of commenting on or editing a tweet when retweeting it among members of political communities on Twitter? What is the nature of such comments (agreement or disagreement) and of such edits (changing the audience, changing the meaning, curating content)? Answering these questions will provide knowledge that helps answer further questions, such as: Which topics, events, and people attract the most discussion (in the form of comments) or controversy (agreement versus disagreement)? Who are the users who engage in the process of curating content by inserting hashtags or adding links? Which political community shows more enthusiasm for an issue, and how broad is its base of engaged users? How can the detection of agreement and disagreement in conversations inform sentiment analysis, the technique used to make predictions (who will win an election) or to support insightful analytics (which policy issue resonates most with constituents)?

We argue that it is necessary to go beyond the widely adopted aggregate analysis of tweet volume in order to discover and understand phenomena at the level of individual tweets. This becomes especially important in light of the growing number of human-mimicking bots on Twitter: genuine interaction and engagement can be better measured by analyzing tweets that display signs of human intervention. Editing the text of an original tweet before retweeting it can reveal mindful user engagement with the content, and would therefore allow us to sample among real human users.

This paper presents work in progress that addresses the challenges of discovering retweets that contain comments or edits, and outlines a machine-learning strategy for classifying the nature of such comments.
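As a minimal illustration of the discovery step described above, the sketch below compares a manual retweet of the conventional form `RT @user: <text>` against the original tweet's text: any material before the `RT` marker is treated as a prepended comment, and the quoted part is fuzzily matched against the original to flag in-place edits such as inserted hashtags or added links. The function name, the `RT` pattern, and the similarity threshold are illustrative assumptions, not the paper's actual method.

```python
import re
import difflib

# Conventional manual-retweet marker: "RT @username: " (an assumption;
# real data also contains variants such as "via @user").
RT_PATTERN = re.compile(r'\bRT @\w+:\s*')

def classify_retweet(original: str, retweet: str, threshold: float = 0.95) -> str:
    """Label a retweet as 'verbatim' or 'edited' relative to the original.

    A hypothetical heuristic: text before the RT marker counts as a
    prepended comment; the quoted part is compared to the original with
    a fuzzy ratio, so truncations, added hashtags, or inserted links
    push the similarity below the (assumed) threshold.
    """
    match = RT_PATTERN.search(retweet)
    if match is None:
        return 'not_a_manual_retweet'
    comment = retweet[:match.start()].strip()
    quoted = retweet[match.end():].strip()
    similarity = difflib.SequenceMatcher(None, original, quoted).ratio()
    if comment:
        return 'edited'   # user prepended a comment before the RT marker
    if similarity < threshold:
        return 'edited'   # quoted text was altered (e.g., hashtag or link added)
    return 'verbatim'
```

Retweets labeled `edited` by such a filter would then be candidate inputs for the machine-learning classifier of comment type (agree/disagree) outlined later in the paper.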