Transformations in Spatial Aptitude (General Aptitude for any competitive exam)

TRANSFORMATIONS In a plane, you can slide, flip, turn, enlarge, or reduce figures to create new figures. Such corresponding figures frequently appear in wallpaper borders, mosaics, and artwork. Each figure you see corresponds to another figure, and these corresponding figures are formed using transformations. A transformation maps an initial image, called a preimage, […]
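The slide, flip, and turn mentioned above can be sketched on a single 2-D point. This is a minimal illustration (not from the article); the function names and the choice of axis/angle are assumptions for demonstration.

```python
# Three basic transformations on a 2-D point:
# slide (translation), flip (reflection), turn (rotation).

def slide(p, dx, dy):
    """Translation: move the point by (dx, dy)."""
    x, y = p
    return (x + dx, y + dy)

def flip(p):
    """Reflection across the y-axis: negate the x-coordinate."""
    x, y = p
    return (-x, y)

def turn(p):
    """Rotation of 90 degrees counterclockwise about the origin."""
    x, y = p
    return (-y, x)

preimage = (2, 3)
print(slide(preimage, 1, -1))  # (3, 2)
print(flip(preimage))          # (-2, 3)
print(turn(preimage))          # (-3, 2)
```

Each function maps the preimage to its image; composing them gives more complex transformations.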

Hadoop File systems and Interfaces in Big data

Hadoop has an abstract notion of filesystems, of which HDFS is just one implementation. The Java abstract class org.apache.hadoop.fs.FileSystem represents the client interface to a filesystem in Hadoop, and there are several concrete implementations. The main ones that ship with Hadoop are described in Table . Interfaces HTTP By exposing its filesystem interface as […]

Hadoop Basic File system Operations

Basic Filesystem operations The information returned is very similar to that returned by the Unix command ls -l, with a few minor differences. The first column shows the file mode. The second column is the replication factor of the file (something a traditional Unix filesystem does not have). Remember that we set the default replication factor […]
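The columns described above can be pulled apart with a simple split. The sample listing line below is a hypothetical example for illustration, not output from a real cluster.

```python
# Splitting one (hypothetical) line of `hadoop fs -ls` output into
# the columns the text describes.

line = "-rw-r--r--   1 tom supergroup       1366 2014-09-04 07:22 /user/tom/books.txt"
fields = line.split()

mode = fields[0]          # file mode, as in Unix `ls -l`
replication = fields[1]   # replication factor (no Unix equivalent)
owner, group = fields[2], fields[3]
size = int(fields[4])     # size in bytes

print(mode, replication, owner, size)
```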

Hadoop Distributed File System concepts Nodes, Block Caching, Federation

Blocks A disk has a block size, which is the minimum amount of data that it can read or write. Filesystems for a single disk build on this by dealing with data in blocks, which are an integral multiple of the disk block size. Filesystem blocks are typically a few kilobytes in size, whereas disk […]
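The relationship between file size and block count can be sketched as below. The 128 MB figure is the default HDFS block size in recent Hadoop releases (an assumption here; older releases used 64 MB), and unlike a single-disk filesystem, a file smaller than one HDFS block does not occupy a full block of underlying storage.

```python
# Sketch: how many HDFS blocks a file of a given size occupies.

import math

BLOCK_SIZE = 128 * 1024 * 1024  # 128 MB, the HDFS default

def num_blocks(file_size):
    """Number of HDFS blocks needed to store a file of this size."""
    return max(1, math.ceil(file_size / BLOCK_SIZE))

print(num_blocks(1 * 1024**3))  # a 1 GB file -> 8 blocks
print(num_blocks(1024))         # a 1 KB file -> 1 block
```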

The Hadoop Distributed Filesystem in Big Data

Hadoop comes with a distributed filesystem called HDFS, which stands for Hadoop Distributed Filesystem. The Design of HDFS HDFS is a filesystem designed for storing very large files with streaming data access patterns, running on clusters of commodity hardware. Very large files “Very large” in this context means files that are hundreds of megabytes, gigabytes, […]

What is Scaling Out and Data Flow Hadoop in Bigdata

Scaling Out MapReduce works for small inputs; now it’s time to take a bird’s-eye view of the system and look at the data flow for large inputs. For simplicity, the examples so far have used files on the local filesystem. However, to scale out, we need to store the data in a distributed filesystem […]

Java MapReduce Hadoop in Bigdata

Java MapReduce Having run through how the MapReduce program works, the next step is to express it in code. We need three things: a map function, a reduce function, and some code to run the job. The map function is represented by the Mapper class, which declares an abstract map() method. Java MapReduce if (airTemperature […]
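The map function hinted at above (the excerpt's Java snippet is truncated) can be sketched in Python. This is an assumption-laden analogue, not the article's code: the fixed-width record layout, the field offsets, and the 9999 "missing" sentinel are all illustrative stand-ins.

```python
# Sketch of a map function that extracts (year, temperature) pairs
# from fixed-width weather records. Offsets and the 9999 sentinel
# are illustrative, not a real record format.

def map_record(line):
    """Return (year, temperature), or None if the reading is missing."""
    year = line[0:4]
    air_temperature = int(line[5:9])
    if air_temperature != 9999:   # 9999 stands in for "missing"
        return (year, air_temperature)
    return None

print(map_record("1950 0022"))  # ('1950', 22)
print(map_record("1950 9999"))  # None
```

In real Java MapReduce, the same logic would live inside the `map()` method of a `Mapper` subclass, emitting pairs via a context object rather than returning them.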

Analyzing the Data with Hadoop in Bigdata

To take advantage of the parallel processing that Hadoop provides, we need to express our query as a MapReduce job. After some local, small-scale testing, we will be able to run it on a cluster of machines. MapReduce Map and Reduce MapReduce works by breaking the processing into two phases: the map phase and the […]
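The two phases can be simulated in plain Python to show the shape of the computation: a map phase emits (key, value) pairs, the framework groups values by key between the phases, and a reduce phase folds each group. The (year, temperature) readings and the max-per-year query below are made-up illustrations.

```python
# Simulating the map and reduce phases for a max-per-key query.

from collections import defaultdict

records = [("1949", 111), ("1949", 78), ("1950", 0), ("1950", 22)]

# Map phase: emit (key, value) pairs (an identity map here).
mapped = [(year, temp) for year, temp in records]

# Shuffle: group values by key, as the framework does between phases.
groups = defaultdict(list)
for year, temp in mapped:
    groups[year].append(temp)

# Reduce phase: one output per key.
reduced = {year: max(temps) for year, temps in groups.items()}
print(reduced)  # {'1949': 111, '1950': 22}
```

The grouping step is what lets each reduce call see all values for one key, no matter which map task produced them.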

What is MapReduce in hadoop BigData

MapReduce is a programming model for data processing. The model is simple, yet not too simple to express useful programs in. Hadoop can run MapReduce programs written in various languages; we look at the same program expressed in Java, Ruby, and Python. Most importantly, MapReduce programs are inherently parallel, thus putting very large-scale data analysis […]

DOUBLE-ENDED QUEUE (DEQUE) IN DATASTRUCTURES

In a DEQUE, insertion and deletion operations are performed at both ends of the queue. In an input-restricted DEQUE, insertion is allowed at only one end while deletion is allowed at both ends. In an output-restricted DEQUE, insertion is allowed at both ends while deletion is allowed at only one end. Operations […]
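The both-end operations described above can be demonstrated with Python's standard `collections.deque`, used here as a minimal sketch of the data structure.

```python
# Double-ended queue: insertion and deletion at both front and rear.

from collections import deque

dq = deque()
dq.append(1)         # insert at rear
dq.append(2)         # insert at rear
dq.appendleft(0)     # insert at front

print(list(dq))      # [0, 1, 2]
print(dq.pop())      # delete at rear -> 2
print(dq.popleft())  # delete at front -> 0
print(list(dq))      # [1]
```

Restricting which of these four operations are permitted yields the input-restricted and output-restricted variants.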
