Now that we understand Flink's architecture and its process model, it's time to get started with a quick setup and try things out on our own. Flink works on both Windows and Linux machines.
The very first thing we need to do is to download Flink's binaries. Flink can be downloaded from the Flink download page at: http://flink.apache.org/downloads.html.
On the download page, you will see multiple options as shown in the following screenshot:
You don't need Hadoop installed in order to install Flink. However, if you need to connect Flink to Hadoop, you must download the binary that is compatible with your Hadoop version.
As I have Hadoop 2.7.0 installed, I am going to download the Flink binary that is compatible with Hadoop 2.7.0 and built on Scala 2.11.
Here is the direct download link:
http://www-us.apache.org/dist/flink/flink-1.1.4/flink-1.1.4-bin-hadoop27-scala_2.11.tgz
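If you prefer the command line, the download can be scripted. The following is a minimal sketch that simply rebuilds the link above from its parts; the use of wget is an assumption (curl -O works equally well):

```shell
# Rebuild the download link from its version components.
# (wget is an assumption; use curl -O if you prefer.)
FLINK_VERSION=1.1.4
HADOOP_TAG=hadoop27
SCALA_TAG=scala_2.11
URL="http://www-us.apache.org/dist/flink/flink-${FLINK_VERSION}/flink-${FLINK_VERSION}-bin-${HADOOP_TAG}-${SCALA_TAG}.tgz"
echo "$URL"
# wget "$URL"   # uncomment to actually download
```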
Flink requires Java, so before you start, please make sure Java is installed. I have JDK 1.8 installed on my machine:
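A quick way to confirm a JDK is available is to check for the java binary on the PATH; a small sketch:

```shell
# Check that a JDK is available before starting Flink.
# (java -version prints to stderr on most JDKs, hence the 2>&1.)
if command -v java >/dev/null 2>&1; then
  JAVA_INFO=$(java -version 2>&1 | head -n 1)
else
  JAVA_INFO="Java not found; install a JDK (1.8 in this walkthrough) first"
fi
echo "$JAVA_INFO"
```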
Installing Flink is very easy: just extract the compressed file and store it in the desired location.
Once extracted, go to the folder and execute start-local.bat:

>cd flink-1.1.4
>bin\start-local.bat
And you will see that the local instance of Flink has started.
You can also check the web UI at http://localhost:8081/:
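A quick reachability check can also be scripted; a sketch, assuming curl is installed (the UI listens on port 8081 by default, as above):

```shell
# Probe the local Flink web UI and report whether it responds.
UI_URL=http://localhost:8081/
if command -v curl >/dev/null 2>&1 && curl -s -o /dev/null "$UI_URL"; then
  UI_STATUS="Flink web UI is up at $UI_URL"
else
  UI_STATUS="Flink web UI is not reachable at $UI_URL"
fi
echo "$UI_STATUS"
```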
You can stop the Flink process by pressing Ctrl + C.
Similar to Windows, installing Flink on a Linux machine is very easy. We need to download the binary, place it in a folder of our choice, extract it, and start it:

$sudo tar -xzf flink-1.1.4-bin-hadoop27-scala_2.11.tgz
$cd flink-1.1.4
$bin/start-local.sh
As in Windows, please make sure Java is installed on the machine.
Now we are all set to submit a Flink job. To stop the local Flink instance on Linux, execute the following command:
$bin/stop-local.sh
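With the local cluster running, a job can be submitted with the bin/flink tool. The sketch below uses the WordCount example jar that ships with the distribution; the exact jar path is an assumption, so check the examples/ directory of your own download:

```shell
# Sketch: submit a bundled example job to the local cluster.
# The jar path is an assumption; verify it in your distribution's examples/ directory.
FLINK_HOME=flink-1.1.4
JOB_JAR="$FLINK_HOME/examples/batch/WordCount.jar"
SUBMIT_CMD="$FLINK_HOME/bin/flink run $JOB_JAR"
echo "$SUBMIT_CMD"
# Uncomment to actually submit once the local cluster is running:
# $SUBMIT_CMD
```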