This article walks you through an example of how to query, transform, and visualize data from social media. We are going to collect tweets from Twitter, store them in HDFS (the Hadoop Distributed File System), and use Jaql and a Java MapReduce application to manipulate and transform the data. Finally, we will visualize the results using a spreadsheet-style tool.
Motivation & Understanding the sample scenario
Very briefly: I’m going to search for tweets containing the words “IBM” and “BigData”. After I collect them in JSON format, I use Jaql to transform them into a simpler structure and store them in HDFS as a comma-delimited file. Then I run a Java MapReduce program that counts the occurrences of each word across all tweets. Finally, I use BigSheets to visualize my analysis. Here is the outcome of the analysis:
Background
In this example, I’m using IBM’s enterprise Hadoop distribution, InfoSphere BigInsights 2.1 (you can download the Quick Start Edition for free here). IBM InfoSphere BigInsights brings the power of Hadoop to the enterprise. Apache Hadoop is an open source software framework used to reliably manage large volumes of structured and unstructured data (for more information about Hadoop see the article Big Data in Hadoop – How? What is it?). BigInsights makes it simpler to use Hadoop and to build big data applications. It enhances this open source technology to withstand the demands of your enterprise, adding administrative, discovery, development, provisioning, and security features, along with best-in-class analytical capabilities from IBM Research. The result is a more developer- and user-friendly solution for complex, large-scale analytics.
For transforming and storing the Twitter data, I use Jaql (which is incorporated within BigInsights). Jaql is one of the languages that help to abstract the complexities of the MapReduce programming framework within Hadoop. It’s a loosely typed functional language with lazy evaluation (meaning that Jaql expressions are not evaluated until their results are needed). Jaql’s data model is based on JSON; it’s a fully expressive programming language (compared to Pig and Hive, which are query languages), it elegantly handles deeply nested semi-structured data, and it can even deal with heterogeneous data. You can read my brief introduction to JAQL.
As a file system, I use HDFS, which is also part of BigInsights. HDFS is a distributed, scalable, Java-based file system that allows you to store large volumes of unstructured data. You can find more information in my article Hadoop Distributed File System (HDFS).
For the graphical visualizations and final data manipulations, I use BigSheets. BigSheets is a spreadsheet-style tool provided with BigInsights that allows you to use standard spreadsheet functions, write your own macros, join tables, filter data, sort data, visualize data in graphs, and so on.
My working environment
- Red Hat Enterprise Linux 6.3 (64-bit) in VMware (check the BigInsights system requirements)
- IBM InfoSphere BigInsights 2.1 Enterprise Edition (you can obtain the Quick Start Edition for free; see my article on how to install BigInsights in VMware)
- IBM Rational Team Concert 4 (based on Eclipse 3.6, which can be downloaded for free from jazz.net) with the BigInsights plugin (instructions on how to install the plugin are available at the web console welcome page after you install BigInsights)
Step 1: Collecting sample data
In this step I’m going to collect tweets using Twitter’s REST API v1.1, specifically the GET search/tweets resource. Twitter’s REST-based Search API only allows you to collect a small amount of data (a maximum of 100 tweets per search request). A production application would likely use Twitter’s Streaming API to obtain large volumes of data, but I will keep it simple this time. I’m going to cover Twitter’s Streaming API together with InfoSphere Streams in another article.
As you probably know, Twitter’s API returns data as JSON. I’m going to collect tweets which contain the two keywords “IBM” and “BigData”. To do so, I need to call the REST API:
https://api.twitter.com/1.1/search/tweets.json?q=IBM%2BBigData&count=100
If you type this address in your browser, you will find that authentication fails. To start collecting data from Twitter, you need to get your OAuth tokens. If you don’t already have them, follow this process:
- Log in to https://dev.twitter.com and go to your applications.
- Click on “Create a new application” and fill in all the required information.
- Click on the “OAuth tool” tab.
- Scroll down and use “Request Settings” to generate the OAuth signature. Fill in the form as shown in the picture below (you can adapt it for your purposes).
- Save the generated Authorization header.
After you get your Authorization header, you can finally collect the tweets. There are many ways to do it; I’m using the Linux wget command. You just need to plug in your Authorization header after the --header argument, and check the URI at the end. Running the following command in a Linux shell collects tweets containing both keywords “IBM” and “BigData” and stores them in /home/biadmin/Documents/tweets.json:
wget -O /home/biadmin/Documents/tweets.json \
  --header 'Authorization: OAuth oauth_consumer_key="XXXXXXXXXXXXXXX", oauth_nonce="XXXXXXXXXXXXXXXXXXXXXXXXXXX", oauth_signature="XXXXXXXXXXXXXXXXXXXXXXXXXXXXX", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1373678823", oauth_token="XXXXXXXXXXXXXXXXXXXXXXXXXXXX", oauth_version="1.0"' \
  --no-check-certificate 'https://api.twitter.com/1.1/search/tweets.json?q=IBM%2BBigData&count=100'
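If you would rather make the request from Java than from the shell, a minimal sketch along the following lines should also work. It simply sends the Authorization header generated by the OAuth tool and writes the raw JSON response to the same file. CollectTweets is a hypothetical class name, the header value is a placeholder, and keep in mind that the generated OAuth signature is only valid for the exact URL and parameters you entered in the tool (and it expires), so treat this as a one-off convenience rather than a production client.
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class CollectTweets {
    public static void main(String[] args) throws Exception {
        // paste the full Authorization header generated by Twitter's OAuth tool here (placeholder)
        String authHeader = "OAuth oauth_consumer_key=\"XXX\", oauth_nonce=\"XXX\", ...";
        // must be exactly the URL the OAuth signature was generated for
        URL url = new URL("https://api.twitter.com/1.1/search/tweets.json?q=IBM%2BBigData&count=100");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestProperty("Authorization", authHeader);

        // copy the JSON response to the local file used in the rest of the article
        InputStream in = conn.getInputStream();
        OutputStream out = new FileOutputStream("/home/biadmin/Documents/tweets.json");
        byte[] buffer = new byte[8192];
        int read;
        while ((read = in.read(buffer)) != -1) {
            out.write(buffer, 0, read);
        }
        out.close();
        in.close();
        conn.disconnect();
    }
}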
Here is a shortened example of the JSON structure with tweets you obtain from Twitter (the whole structure is available here):
{
"statuses":[
{
"metadata":{
"result_type":"recent",
"iso_language_code":"pt"
},
"created_at":"Mon Jul 15 03:41:29 +0000 2013",
"id":356619501464322049,
"id_str":"356619501464322049",
"text":"Metade das empresas brasileiras est\u00e1 planejando Big Data, diz IBM #bigdata http:\/\/t.co\/5d4D8hblfU via @computerworldbr",
"source":"\u003ca href=\"http:\/\/twitter.com\/tweetbutton\" rel=\"nofollow\"\u003eTweet Button\u003c\/a\u003e",
"truncated":false,
"in_reply_to_status_id":null,
"in_reply_to_status_id_str":null,
"in_reply_to_user_id":null,
"in_reply_to_user_id_str":null,
"in_reply_to_screen_name":null,
"user":{
"id":1070062940,
"id_str":"1070062940",
"name":"agenor neto",
"screen_name":"netoage",
...
},
"geo":null,
...
"lang":"pt"
},
{
"metadata":{
"result_type":"recent",
"iso_language_code":"en"
},
"created_at":"Mon Jul 15 03:03:02 +0000 2013",
"id":356609826089992192,
"id_str":"356609826089992192",
"text":"RT @IBMbigdata: Unleash your data with #BigInsights Quick Start: #Hadoop 4 Enterprise http:\/\/t.co\/TPEY2u3g2r #bigdata #bigdatamgmt",
"source":"\u003ca href=\"http:\/\/www.tweetdeck.com\" rel=\"nofollow\"\u003eTweetDeck\u003c\/a\u003e",
"truncated":false,
"in_reply_to_status_id":null,
"in_reply_to_status_id_str":null,
"in_reply_to_user_id":null,
"in_reply_to_user_id_str":null,
"in_reply_to_screen_name":null,
"user":{
"id":385493059,
"id_str":"385493059",
"name":"Phil Grennan",
"screen_name":"PhilGrennan_Pro",
...
},
"geo":null,
...
"lang":"en"
}
],
"search_metadata":{
"completed_in":0.097,
"max_id":356619501464322049,
"max_id_str":"356619501464322049",
"next_results":"?max_id=356029023236784127&q=IBM%2Bbigdata&count=2&include_entities=1",
"query":"IBM%2Bbigdata",
"refresh_url":"?since_id=356619501464322049&q=IBM%2Bbigdata&include_entities=1",
"count":2,
"since_id":0,
"since_id_str":"0"
}
}
Step 2: Processing and transforming data with JAQL
Now we have our tweets collected in JSON format and stored in /home/biadmin/Documents/tweets.json. The next thing we are going to do is transform them into a simpler structure and store them in HDFS as a comma-delimited file. We are going to use Jaql, which is provided with BigInsights. If you are not familiar with Jaql, you can check my brief introduction to JAQL. There are several ways to use Jaql:
- Jaql plugin for Eclipse (instructions on how to install the BigInsights plugin for Eclipse are available at the welcome page of the BigInsights web console; the Jaql Eclipse plugin is not supported on Windows at the time of publishing this article)
- Jaql shell (a command-line interface which can be launched from $BIGINSIGHTS_HOME/jaql/bin/jaqlshell)
- Jaql ad hoc query application accessible through the BigInsights web console (must be deployed first)
- Jaql web server, which allows executing Jaql scripts via REST API calls
- Jaql API for embedding Jaql in a Java program
The easiest and most straightforward way is to use the Jaql shell. Just run $BIGINSIGHTS_HOME/jaql/bin/jaqlshell in a terminal and you can start writing your Jaql statements. However, I’m going to use Eclipse with the BigInsights plugin. Here is how to set up your Eclipse environment:
- The Jaql Eclipse plugin works only under a Linux environment (at the time of writing this article). Download and install Eclipse 3.6 (version 3.6 is the only one supported with BigInsights 2.1).
- Go to your BigInsights web console (typically at http://localhost:8080/) and at the Welcome page click on “Enable your Eclipse development environment for BigInsights application development”.
- Then follow the installation instructions.
- The next step is to run Eclipse, switch to the BigInsights perspective, and set up a connection with your BigInsights server. Then you are ready to write your BigInsights applications.
OK, our IDE environment is ready. Let’s start with Jaql and write a script that loads our tweets, extracts the data we consider important, and saves the results to HDFS. To do so, create a BigInsights project within Eclipse and create a new Jaql script file in it. Here is the script; it creates two files (one comma-delimited file and one Hadoop sequence file which contains just the text content of the tweets).
// load the collected tweets from the local file system (the exact read call is reconstructed here; see the note on jaqlGet() below)
tweets = jaqlGet("file:///home/biadmin/Documents/tweets.json");
tweets = tweets.statuses -> transform {
$.created_at,
tweet_id: $.id_str,
$.geo,
$.coordinates,
$.location,
user_id: $.user.id_str,
user_name: $.user.name,
user_screen_name: $.user.screen_name,
user_location: $.user.location,
user_description: $.user.description,
user_url: $.user.url,
user_followers_count: $.user.followers_count,
user_friends_count: $.user.friends_count,
$.retweet_count,
$.favorite_count,
$.lang,
text: $.text };
tweets -> write(del("/user/root/tweets.del", schema = schema {
created_at,
tweet_id,
geo,
coordinates,
location,
user_id,
user_name,
user_screen_name,
user_location,
user_description,
user_url,
user_followers_count,
user_friends_count,
retweet_count,
favorite_count,
lang,
text }));
tweets_text = tweets -> transform $.text;
tweets_text -> write(seq("/user/root/content/tweets_text.seq"));
The script can be run directly from Eclipse.
Here is a very brief explanation of what the script does. You can learn more about Jaql in my article JAQL in Hadoop – a brief introduction or from the official reference guide.
- jaqlGet() loads a JSON text file (in our example from the local file system). More information about reading and writing JSON files is available here.
- tweets.statuses -> transform ... takes the original JSON structure and transforms it into the simpler one.
- tweets -> write(del("/user/root/tweets.del", schema = schema {... writes the new structure to a comma-delimited file (in our example to a file in HDFS).
- tweets -> transform $.text; extracts just the content of the tweets (the text part).
- tweets_text -> write(seq("/user/root/content/tweets_text.seq")); stores the extracted text content of the tweets into a Hadoop sequence file (again, in this example, in HDFS).
If you want to see the content of the tweets variable, just run it as a statement:
tweets;
You should get an output like this:
[
{
"created_at": "Mon Jul 15 03:41:29 +0000 2013",
"tweet_id": "356619501464322049",
"geo": null,
"coordinates": null,
"user_id": "1070062940",
"user_name": "agenor neto",
"user_screen_name": "netoage",
"user_location": "",
"user_description": "student of Information Systems, junior entrepreneur and pernambucano proudly.",
"user_url": null,
"user_followers_count": 17,
"user_friends_count": 48,
"retweet_count": 0,
"favorite_count": 0,
"lang": "pt",
"text": "Metade das empresas brasileiras está planejando Big Data, diz IBM #bigdata http://t.co/5d4D8hblfU via @computerworldbr"
},
{
"created_at": "Mon Jul 15 03:03:02 +0000 2013",
"tweet_id": "356609826089992192",
"geo": null,
"coordinates": null,
"user_id": "385493059",
"user_name": "Phil Grennan",
"user_screen_name": "PhilGrennan_Pro",
"user_location": "US, Fort Mill, SC",
"user_description": "Smarter Analytics Geek-Banking-IBM, Be Happy, then set a goal! Dont set a goal to be happy! (Stolen), Opinions expressed are my own :)",
"user_url": null,
"user_followers_count": 61,
"user_friends_count": 162,
"retweet_count": 0,
"favorite_count": 0,
"lang": "en",
"text": "RT @IBMbigdata: Unleash your data with #BigInsights Quick Start: #Hadoop 4 Enterprise http://t.co/TPEY2u3g2r #bigdata #bigdatamgmt"
},
...
]
And this is the output of /user/root/tweets.del:
"Mon Jul 15 03:03:02 +0000 2013","356609826089992192",,,,"385493059","Phil Grennan","PhilGrennan_Pro","US, Fort Mill, SC","Smarter Analytics Geek-Banking-IBM, Be Happy, then set a goal! Dont set a goal to be happy! (Stolen), Opinions expressed are my own :)",,61,162,0,0,"en","RT @IBMbigdata: Unleash your data with #BigInsights Quick Start: #Hadoop 4 Enterprise http://t.co/TPEY2u3g2r #bigdata #bigdatamgmt"
...
The second file we created (/user/root/content/tweets_text.seq) is a sequence file. A SequenceFile is a flat file consisting of binary key/value pairs, and it is extensively used in MapReduce as an input/output format. This file contains just the text parts of the tweets, and we are going to process it with a Java MapReduce program that counts the occurrences of each word across all tweets. Then we are going to display the most frequently used words in a tag cloud.
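Since this sequence file is what we will feed into MapReduce in the next step, it can be handy to peek at a few records first. Below is a minimal sketch using Hadoop’s SequenceFile.Reader; SeqFilePeek is a hypothetical class name, and it assumes you launch it with the hadoop command so that the cluster configuration and the serialization classes used by the Jaql writer are on the classpath.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.SequenceFile;
import org.apache.hadoop.io.Writable;
import org.apache.hadoop.util.ReflectionUtils;

public class SeqFilePeek {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        Path path = new Path("/user/root/content/tweets_text.seq");
        FileSystem fs = FileSystem.get(conf);
        SequenceFile.Reader reader = new SequenceFile.Reader(fs, path, conf);
        try {
            // the key/value classes are recorded in the file header by the writer
            Writable key = (Writable) ReflectionUtils.newInstance(reader.getKeyClass(), conf);
            Writable value = (Writable) ReflectionUtils.newInstance(reader.getValueClass(), conf);
            int shown = 0;
            while (reader.next(key, value) && shown++ < 10) {
                System.out.println(key + "\t" + value);   // print the first few records
            }
        } finally {
            reader.close();
        }
    }
}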
Step 3: Counting word occurrences with a Java MapReduce program
This step is optional. Before we work with BigSheets and visualize the data we extracted, I would like to demonstrate a very simple Java MapReduce application that counts the occurrences of each word in all tweets. The application we are going to use is called WordCount and is part of the Hadoop distribution. However, BigInsights is bundled with Hadoop 1.1.1, which ships WordCount version 1.0. Since that version is very basic, I’m going to download the source code of WordCount v2.0 and run it against our tweets’ texts. Compared to version 1.0, version 2.0 provides options to filter out unwanted characters and to transform all words to lowercase.
The first step is to create a BigInsights project. Then, in the “src” folder, create a new package called “mypackage”, create a new Java class file “WordCount.java” in it, and copy the following WordCount v2.0 source code into it (you can also download the source code in the text file WordCount.java.txt):
import java.io.*;
import java.util.*;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.filecache.DistributedCache;
import org.apache.hadoop.conf.*;
import org.apache.hadoop.io.*;
import org.apache.hadoop.mapred.*;
import org.apache.hadoop.util.*;
public class WordCount extends Configured implements Tool {
// Mapper: tokenizes each input line and emits a (word, 1) pair for every token
public static class Map extends MapReduceBase implements Mapper<LongWritable, Text, Text, IntWritable> {
static enum Counters { INPUT_WORDS }
private final static IntWritable one = new IntWritable(1);
private Text word = new Text();
private boolean caseSensitive = true;
private Set<String> patternsToSkip = new HashSet<String>();
private long numRecords = 0;
private String inputFile;
public void configure(JobConf job) {
caseSensitive = job.getBoolean("wordcount.case.sensitive", true);
inputFile = job.get("map.input.file");
if (job.getBoolean("wordcount.skip.patterns", false)) {
Path[] patternsFiles = new Path[0];
try {
patternsFiles = DistributedCache.getLocalCacheFiles(job);
} catch (IOException ioe) {
System.err.println("Caught exception while getting cached files: " + StringUtils.stringifyException(ioe));
}
for (Path patternsFile : patternsFiles) {
parseSkipFile(patternsFile);
}
}
}
private void parseSkipFile(Path patternsFile) {
try {
BufferedReader fis = new BufferedReader(new FileReader(patternsFile.toString()));
String pattern = null;
while ((pattern = fis.readLine()) != null) {
patternsToSkip.add(pattern);
}
} catch (IOException ioe) {
System.err.println("Caught exception while parsing the cached file '" + patternsFile + "' : " + StringUtils.stringifyException(ioe));
}
}
public void map(LongWritable key, Text value, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
String line = (caseSensitive) ? value.toString() : value.toString().toLowerCase();
for (String pattern : patternsToSkip) {
line = line.replaceAll(pattern, "");
}
StringTokenizer tokenizer = new StringTokenizer(line);
while (tokenizer.hasMoreTokens()) {
word.set(tokenizer.nextToken());
output.collect(word, one);
reporter.incrCounter(Counters.INPUT_WORDS, 1);
}
if ((++numRecords % 100) == 0) {
reporter.setStatus("Finished processing " + numRecords + " records " + "from the input file: " + inputFile);
}
}
}
// Reducer: sums up the counts emitted for each word
public static class Reduce extends MapReduceBase implements Reducer<Text, IntWritable, Text, IntWritable> {
public void reduce(Text key, Iterator<IntWritable> values, OutputCollector<Text, IntWritable> output, Reporter reporter) throws IOException {
int sum = 0;
while (values.hasNext()) {
sum += values.next().get();
}
output.collect(key, new IntWritable(sum));
}
}
// configures and submits the job; handles the -skip option and the input/output paths
public int run(String[] args) throws Exception {
JobConf conf = new JobConf(getConf(), WordCount.class);
conf.setJobName("wordcount");
conf.setOutputKeyClass(Text.class);
conf.setOutputValueClass(IntWritable.class);
conf.setMapperClass(Map.class);
conf.setCombinerClass(Reduce.class);
conf.setReducerClass(Reduce.class);
conf.setInputFormat(TextInputFormat.class);
conf.setOutputFormat(TextOutputFormat.class);
List<String> other_args = new ArrayList<String>();
for (int i=0; i < args.length; ++i) {
if ("-skip".equals(args[i])) {
DistributedCache.addCacheFile(new Path(args[++i]).toUri(), conf);
conf.setBoolean("wordcount.skip.patterns", true);
} else {
other_args.add(args[i]);
}
}
FileInputFormat.setInputPaths(conf, new Path(other_args.get(0)));
FileOutputFormat.setOutputPath(conf, new Path(other_args.get(1)));
JobClient.runJob(conf);
return 0;
}
public static void main(String[] args) throws Exception {
int res = ToolRunner.run(new Configuration(), new WordCount(), args);
System.exit(res);
}
}
Here is the WordCount.java file in my Eclipse:
WordCount v2.0 is a very simple Java application that harnesses the power of MapReduce. If you aren’t familiar with MapReduce, I recommend visiting BigDataUniversity.com and enrolling in one of the courses about Hadoop fundamentals (they are completely free). Our WordCount application takes several arguments:
- -Dwordcount.case.sensitive=false says whether processing should be case sensitive
- path/to/input/directory specifies the input directory from which ALL files will be taken into processing
- path/to/output/directory specifies the output directory where the results will be stored
- -skip /user/root/patterns.txt specifies the file that contains characters/words to skip (Java regular expressions can be used)
Before we run the application and process the file containing the text parts of the tweets (/user/root/content/tweets_text.seq), we need to do two things. The first is to export our WordCount application to a JAR file. To do so, follow these steps:
- Right click the Eclipse project and choose “Export”.
- Select Java -> JAR file and click Next.
- Check both “Export generated class files and resources” and “Export Java source files and resources”. Specify the export destination (for example /home/biadmin/Documents/WordCount2.jar). Click Next.
- On the next page leave everything at the defaults and click Next.
- On the last page set the Main class to be “mypackage.WordCount” and click Finish.
The second thing we need to do is create a file with patterns that should be skipped when processing the words. This file must then be stored in HDFS. Let’s follow these steps:
- Create a new file: touch /home/biadmin/Documents/patterns.txt
- Edit the file with vi /home/biadmin/Documents/patterns.txt and specify the characters you want to skip (as Java regular expressions). Every line represents one expression. A sample file removing the . , ! : ; @ # ' ( ) characters from words could look like this:
\.
\,
\!
\:
\;
\@
\#
\'
\(
\)
- Move the file to HDFS: hadoop fs -put /home/biadmin/Documents/patterns.txt /user/root/patterns.txt (if you are not familiar with HDFS, I recommend reading my article about HDFS).
- Now we have the pattern file stored in HDFS at the path /user/root/patterns.txt.
Now we are ready to run our Java MapReduce application within Hadoop. If you still remember, our input file tweets_text.seq is stored in the folder /user/root/content. We are going to save the results into /user/root/results (the output directory must not exist beforehand, otherwise the MapReduce job fails; to remove the directory use the command hadoop fs -rmr /user/root/results). Finally, to run the application, execute this statement in your terminal:
hadoop jar /home/biadmin/Documents/WordCount2.jar -Dwordcount.case.sensitive=false /user/root/content /user/root/results -skip /user/root/patterns.txt
If everything goes well, your terminal should look like this:
The result is stored in HDFS in a location similar to /user/root/results/part-XXXXX. You can check the content of the folder by running hadoop fs -ls /user/root/results, and you can then view the result file by running hadoop fs -cat /user/root/results/part-XXXXX. As you will find out, the file contains tab-separated data in the format WORD<tab>number of occurrences.
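If you would like to pull the most frequent words out programmatically (for example, to feed a tag cloud), a short Java sketch like the one below could read the part files directly from HDFS and print the top entries. TopWords is a hypothetical class name, the results path and the top-20 cut-off are assumptions, and words with identical counts simply overwrite each other in the map, which is good enough for a quick look.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Map;
import java.util.TreeMap;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class TopWords {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        // count -> word, sorted by count; only the 20 highest counts are kept
        TreeMap<Integer, String> top = new TreeMap<Integer, String>();
        for (FileStatus status : fs.listStatus(new Path("/user/root/results"))) {
            if (!status.getPath().getName().startsWith("part-")) continue;  // skip _SUCCESS, _logs
            BufferedReader in = new BufferedReader(new InputStreamReader(fs.open(status.getPath())));
            String line;
            while ((line = in.readLine()) != null) {
                String[] parts = line.split("\t");          // WORD<tab>count
                if (parts.length == 2) {
                    top.put(Integer.parseInt(parts[1]), parts[0]);
                    if (top.size() > 20) top.remove(top.firstKey());
                }
            }
            in.close();
        }
        for (Map.Entry<Integer, String> e : top.descendingMap().entrySet()) {
            System.out.println(e.getValue() + "\t" + e.getKey());
        }
    }
}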
Another way to look at the results is to use the BigInsights web console. Go to http://yourserver:8080/ and from the menu click on “Files”, then navigate to your file through the folder structure. You should see something similar to this:
Step 4: Visualizing outcomes with BigSheets
This part is coming in a separate article How to visualize BIG data with InfoSphere BigSheets.
Conclusion
This article should serve as a very simple example of how to collect, analyze, and visualize data from social media like Twitter. It is definitely not a best practice. To use the full power of BigInsights, steps 1, 2, and 3 could be merged into one Java application that could be deployed to the BigInsights server and scheduled for automatic runs.
Follow Ups
1. An article about data exploration and visualizations in BigSheets (done)
2. An article about how to do sentiment analysis with BigInsights
3. An article about how to collect data from streams
4. An article about how to use visualizations with InfoSphere Data Explorer
Resources
- IBM InfoSphere BigInsights 2.1 Information center
- Presentation Hadoop scripting with JAQL at Innovate 2013 Conference (not available online)
- developerWorks: Analyzing social media and structured data with InfoSphere BigInsights
- developerWorks: Query social media and structured data with InfoSphere BigInsights
- Hadoop Map/Reduce Tutorial
- Twitter Developers
Comments
Thanks for providing such a helpful and detailed example, especially the part demonstrating how to work with Twitter’s new version of the REST API (v1.1). I am looking forward to your next article about collecting data from streams; hopefully, data collected by Streams from some real-time data source can sink directly into Hadoop for further analysis in real time.
Perfect, very useful article.
sudo wget -O /home/tweets.json --header='Authorization: OAuth oauth_consumer_key="JHHWxQ8FJMCha4VCxYK02w", oauth_nonce="80afb4dae06a48a334de38938a69ef31", oauth_signature="Qfyc4nZhrR9S6MMEWNc32JJZU6s%3D", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1393407959", oauth_token="2311270080-GcAt0LFgbOMDFcE0ftlwh5HLkn39ZdmPPldj5Kv", oauth_version="1.0"' --no-check-certificate 'https://api.twitter.com/1.1/search/tweets.json?q=IBM%2BBigData&count=100'
--2014-02-26 10:53:08-- https://api.twitter.com/1.1/search/tweets.json?q=IBM%2BBigData&count=100
Resolving api.twitter.com (api.twitter.com)… 199.59.149.232, 199.59.148.87, 199.59.149.199
Connecting to api.twitter.com (api.twitter.com)|199.59.149.232|:443… connected.
HTTP request sent, awaiting response… 401 Unauthorized
Authorization failed.
//please help me to resolve this
I also got this problem radha
Resolving api.twitter.com… 199.16.156.104
Connecting to api.twitter.com|199.16.156.104|:443… connected.
HTTP request sent, awaiting response…
HTTP/1.0 401 Unauthorized
content-length: 61
content-type: application/json; charset=utf-8
date: Fri, 09 May 2014 19:21:11 UTC
server: tfe
set-cookie: guest_id=v1%3A139966327167889068; Domain=.twitter.com; Path=/; Expires=Sun, 08-May-2016 19:21:11 UTC
strict-transport-security: max-age=631138519
x-tfe-logging-request-category: API
Connection: keep-alive
Authorization failed.
//any help?
Hi,
Matous just helped me out. I solved the problem by changing the query in the wget command, and when I generated a new OAuth signature I used the same query.
any doubts just send me an e-mail
cbiancp@br.ibm.com
Hello,
thanks for this great tuto!
I want to add more regex to skip stopwords from tweets.
For example: \s{1}+[an]+\s{1}
It works fine, but all matches are replaced with nothing, so the words are not separated anymore.
I can’t find how to add a space to separate the words in the WordCount script.
Can you help please?
Hi, I have got this problem, can you help me?
wget --proxy-user=***** --proxy-password=******* -O /home/tweets.json -S --header 'Authorization: OAuth oauth_consumer_key="wDpcK6zcH7LfLlLNT33djKlmI", oauth_nonce="dd6b71483952c7caad3f966045241ff5", oauth_signature="ttQhaH96YKw0RRpJvh8t4N2Cmec%3D", oauth_signature_method="HMAC-SHA1", oauth_timestamp="1406118788", oauth_token="155952777-qvM2nRrNSfZMwUxTv6LNg3SsPJh2QuHDlvSHYWh5", oauth_version="1.0"' --no-check-certificate 'https://api.twitter.com/1.1/search/tweets.json?q=formula 1 2011 monaco&src=typd'
--2014-07-23 15:09:18-- https://api.twitter.com/1.1/search/tweets.json?q=formula%201%202011%20monaco&src=typd
Connecting to ********… connected.
Proxy request sent, awaiting response…
HTTP/1.0 401 Unauthorized
content-length: 63
content-type: application/json; charset=utf-8
date: Wed, 23 Jul 2014 13:09:18 UTC
server: tfe
set-cookie: guest_id=v1%3A140612095856463069; Domain=.twitter.com; Path=/; Expires=Fri, 22-Jul-2016 13:09:18 UTC
strict-transport-security: max-age=631138519
Authorization failed.
Please provide complete source code as downloadable .Thanks. Great tut!