Assignment 5

Hadoop and Spark, both developed by the Apache Software Foundation, are widely used open-source frameworks for big data architectures. Both Hadoop and Spark enable a big data processing job to be split into smaller tasks, which are distributed across the cluster and performed in parallel using an algorithm (i.e., MapReduce).
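
To make the split-and-merge idea concrete, here is a minimal, single-process sketch of the MapReduce pattern in plain Python (a toy word count; in Hadoop the map and reduce phases actually run distributed across the cluster's nodes):

```python
from collections import defaultdict

def map_phase(records):
    # Map: emit a (key, value) pair for every word in every record.
    for record in records:
        for word in record.split():
            yield (word, 1)

def reduce_phase(pairs):
    # Reduce: merge all values that share the same key.
    counts = defaultdict(int)
    for key, value in pairs:
        counts[key] += value
    return dict(counts)

records = ["big data", "big cluster"]
print(reduce_phase(map_phase(records)))  # {'big': 2, 'data': 1, 'cluster': 1}
```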

Spark tends to perform faster than Hadoop because it uses random-access memory (RAM) to cache and process data, instead of the file system that Hadoop relies on. This enables Spark to handle use cases that Hadoop cannot.

In this assignment, you will run both Hadoop and Spark on your own computer:

Task 1: preprocess an input dataset using Hadoop.

Task 2 and Task 3: analyze the preprocessed dataset (the output of Task 1) using Spark.

Setup Hadoop

Because Hadoop is open source, you can download and install it (see the Hadoop webpage) on your own computer!

Hadoop Single Node Installation Reference:
https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/SingleCluster.html

The conf/slaves file specifies the hostnames or IP addresses of all the worker nodes. By default, it only contains localhost.

Run the example WordCount application:
https://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html#Source_Code

Exercise (Hadoop)

Task 1: Preprocess data. Process the provided user query logs (search_data.sample). Strip the clickUrls in the query log using Hadoop, leaving only a specific part (the URL before the first '/') of each clickUrl.

Example input: google.com/docs/about/

Example output: google.com

You can start by modifying the WordCount application (a Python alternative is sketched below).

The preprocessed search_data.sample is used as the input for the following two tasks.
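
If you would rather write Python than modify the Java WordCount, one possible route is Hadoop Streaming, which runs a script as the mapper. The sketch below is a hypothetical mapper (strip_url_mapper.py) that assumes each log line is tab-separated with the clickUrl in the last field; check the actual column layout of search_data.sample and adjust the index if needed.

```python
#!/usr/bin/env python3
# strip_url_mapper.py -- hypothetical Hadoop Streaming mapper for Task 1.
# Assumption: input lines are tab-separated and the clickUrl is the last
# field; adjust CLICK_URL_FIELD to match search_data.sample.
import sys

CLICK_URL_FIELD = -1  # index of the clickUrl column (assumed)

for line in sys.stdin:
    fields = line.rstrip("\n").split("\t")
    if not fields[CLICK_URL_FIELD]:
        continue  # skip lines with an empty clickUrl
    url = fields[CLICK_URL_FIELD]
    # Keep only the part before the first '/': google.com/docs/about/ -> google.com
    fields[CLICK_URL_FIELD] = url.split("/", 1)[0]
    print("\t".join(fields))
```

Because the job only rewrites lines, it can run map-only; Hadoop Streaming accepts -numReduceTasks 0 for that, and the streaming jar usually lives under share/hadoop/tools/lib/ in a Hadoop installation.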

Setup Spark

Apache Spark is an open-source unified analytics engine for large-scale data processing. Spark provides an interface for programming clusters with implicit data parallelism and fault tolerance.

Download Spark: https://spark.apache.org/downloads.html

Learn more about Spark: https://spark.apache.org/examples.html

You need to analyze the user query logs of a search engine. Complete the following two tasks:

Task 2: Rank the tokens (e.g., blog and www) that appear most often in the queried URLs.

Task 3: Rank the time periods (by minute) with the most queries.

Setup pseudo-distributed Spark

Run a Spark cluster on your machine.

Start the master node and one worker node with Spark's standalone mode (Spark Standalone Mode), e.g., using the sbin/start-master.sh and sbin/start-worker.sh scripts shipped with Spark.

After starting the master node, you can check the master's web UI at http://localhost:8080 to see the current setup.

Run the example application with Spark:
https://spark.apache.org/docs/latest/submitting-applications.html

Exercise (Spark)

Task 2: Rank the tokens that appear most often in the queried URL. Tokenize the clickUrls in the query log, then rank the tokens according to the number of times they appear. The output should be the top ten tokens and the number of times they appear.

Example output: (www,4566) (question,743) (bbs,729) (blog,390)
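
A minimal PySpark sketch for Task 2, under a few stated assumptions: the preprocessed log is tab-separated with the stripped clickUrl in the last field, "tokens" are the pieces of the URL split on non-alphanumeric characters, and the input path is a placeholder to replace with your own.

```python
# task2_top_tokens.py -- minimal PySpark sketch for Task 2 (assumptions above).
import re
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Task2TopTokens").getOrCreate()

lines = spark.sparkContext.textFile("preprocessed_search_data.sample")  # assumed path
top10 = (lines
         .map(lambda line: line.rstrip("\n").split("\t")[-1])   # clickUrl field (assumed last)
         .flatMap(lambda url: re.split(r"[^0-9A-Za-z]+", url))  # tokenize the URL
         .filter(lambda tok: tok)                               # drop empty tokens
         .map(lambda tok: (tok, 1))
         .reduceByKey(lambda a, b: a + b)                       # count each token
         .takeOrdered(10, key=lambda kv: -kv[1]))               # ten most frequent

print(" ".join(f"({tok},{count})" for tok, count in top10))
spark.stop()
```

The script can be launched with spark-submit (see the submitting-applications page above), optionally pointing --master at the standalone master you started earlier.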

Task 3: Rank the time periods (by minute) with the most queries. Count the number of queries in each minute, then rank the minutes from most to fewest queries. The output should be the top ten time periods (by minute) with the most queries and the number of queries during each period.

Example output: (00:01,1045) (00:00,1043) (00:06,1033)
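
And a matching PySpark sketch for Task 3, assuming the first tab-separated field of each log line is a time-of-day string such as 00:01:23, so its first five characters (HH:MM) identify the minute; adjust the parsing if the log's timestamp format differs.

```python
# task3_busiest_minutes.py -- minimal PySpark sketch for Task 3 (assumption above).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("Task3BusiestMinutes").getOrCreate()

lines = spark.sparkContext.textFile("preprocessed_search_data.sample")  # assumed path
top10 = (lines
         .map(lambda line: line.split("\t")[0][:5])   # HH:MM of each query (assumed format)
         .map(lambda minute: (minute, 1))
         .reduceByKey(lambda a, b: a + b)             # queries per minute
         .takeOrdered(10, key=lambda kv: -kv[1]))     # ten busiest minutes

print(" ".join(f"({minute},{count})" for minute, count in top10))
spark.stop()
```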

Submission

Submit all your source file(s) and a document. The document should contain the screenshots of the running program and the output results.


