C++14: the maximum and minimum values of an integer

Table of Contents: 1. How to obtain the maximum value of an int 1.1. The C++14 way 1.2. Implementing the maximum yourself 1.3. How to implement the minimum yourself. 1. How to obtain the maximum value with std::numeric_limits. 1.1 C++14: cout << "int\t" << numeric_limits<int>::lowest() << '\t' << numeric_limits<int>::max() << '\n'; The output is: int -2147483648 2147483647. 1.2 Implementing the maximum of an int...
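A minimal compilable reconstruction of the numeric_limits snippet quoted above, assuming C++11 or later (the printed extremes hold for a 32-bit int):

    #include <iostream>
    #include <limits>

    int main() {
        // lowest() and max() report the extreme values of the type.
        std::cout << "int\t"
                  << std::numeric_limits<int>::lowest() << '\t'
                  << std::numeric_limits<int>::max() << '\n';
        // Typical output with a 32-bit int: int  -2147483648  2147483647
        return 0;
    }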
read(401) comment(1)

C++14 significantly optimizes memory allocation performance

Table of Contents: 1. The essence of the requirement 2. The existing problem 3. The solution. This article briefly introduces the memory-optimization changes in the C++14 standard, which let the compiler apply TCMalloc-like allocation strategies instead of naively performing one memory allocation per new expression. It is therefore reasonable to expect that C++ programs compiled with a C++14 compiler will show a clear improvement in memory allocation performance. The source text below comes from the clang compiler documentation; I have translated the main content.
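A small sketch of the kind of code these relaxed allocation rules apply to; a C++14 compiler such as clang at -O2 is permitted to fuse the two new expressions below into a single underlying allocation, though whether it actually does is entirely up to the optimizer:

    #include <iostream>

    int main() {
        // Under the C++14 rules described above, the compiler may merge or
        // elide these two calls to operator new instead of allocating twice.
        int* p = new int(1);
        int* q = new int(2);
        std::cout << *p + *q << '\n';  // 3
        delete q;
        delete p;
        return 0;
    }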
read(680) comment(3)

C++14 allows writing binary literal constants directly

Table of Contents: 1. Writing binary literal constants directly. 1. Writing binary literal constants directly: C++14 introduces binary literals, written as a string of binary digits prefixed with 0b or 0B. The code is as follows: void WriteBinaryLiterals() { size_t i = 42; size_t j = 0b101010; cout << (i == j) << endl; } The running resul...
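A cleaned-up, compilable version of the WriteBinaryLiterals example from the excerpt (build with -std=c++14):

    #include <iostream>

    void WriteBinaryLiterals() {
        std::size_t i = 42;
        std::size_t j = 0b101010;  // C++14 binary literal: 101010 in base 2 is 42
        std::cout << (i == j) << std::endl;  // prints 1: the values are equal
    }

    int main() {
        WriteBinaryLiterals();
        return 0;
    }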
read(585) comment(2)

The C++14 standard eliminates the need for decltype to compute function return types

Table of Contents: 1. Before C++11 the auto keyword is not supported 2. C++11 supports the auto keyword 2.1. But it cannot automatically deduce a function's return type 2.2. Using -> decltype to declare the return type 3. C++14 makes things simple again 4. Which version of C++ we should use. 1. Before C++11 the auto keyword is not supported; the following code is not supported in C++11: auto add(int a, int b) { int i =...
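A side-by-side sketch of the two styles the excerpt contrasts; the names add11 and add14 are mine, for illustration:

    #include <iostream>

    // C++11: auto alone cannot deduce the return type, so a trailing
    // -> decltype(...) declaration is required.
    template <typename A, typename B>
    auto add11(A a, B b) -> decltype(a + b) {
        return a + b;
    }

    // C++14: the return type is deduced from the return statement,
    // and decltype is no longer needed.
    template <typename A, typename B>
    auto add14(A a, B b) {
        return a + b;
    }

    int main() {
        std::cout << add11(1, 2.5) << ' ' << add14(1, 2.5) << '\n';  // 3.5 3.5
        return 0;
    }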
read(588) comment(0)

Zooming images in and out in Emacs

Emacs can view images by default, but zooming in and out requires extra configuration. You need to install the image+ module, which can be installed via the package manager, then add one line to init.el: (eval-after-load 'image '(require 'image+)). When you open a PNG image, run the command M-x imagex-auto-adjust-mode and refresh the buffer with revert-buff...
read(561) comment(0)

Basic atomic operations

Table of Contents: 1. The atomic operations supported by processors 1.1. CAS (Compare and Swap/Set) 1.1.1. Function prototype 1.1.2. Implementation logic 1.1.3. The C++11 standard library 1.2. Fetch and Add 1.3. Test and Set. 1. The atomic operations supported by processors. 1.1 CAS (Compare and Swap/Set): for details see the wiki; below is my understanding...
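A minimal sketch of how the three primitives listed above map onto the C++11 standard library:

    #include <atomic>
    #include <iostream>

    int main() {
        std::atomic<int> value{10};

        // CAS: if value == expected, store the desired value (42) and return
        // true; otherwise load the current value into expected and return false.
        int expected = 10;
        bool swapped = value.compare_exchange_strong(expected, 42);
        std::cout << swapped << ' ' << value.load() << '\n';  // 1 42

        // Fetch and Add: atomically add 1 and return the previous value.
        int previous = value.fetch_add(1);
        std::cout << previous << ' ' << value.load() << '\n';  // 42 43

        // Test and Set: set the flag and return its previous state.
        std::atomic_flag flag = ATOMIC_FLAG_INIT;
        std::cout << flag.test_and_set() << ' '
                  << flag.test_and_set() << '\n';  // 0 1
        return 0;
    }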
read(689) comment(0)

Writing a first thrift example in C++

Table of Contents: 1. Create the thrift file 2. Generate the C++ code 3. Write the C++ client code 4. The example project 4.1. The project directory 4.2. Compile the server 4.3. Compile the client 4.4. Run. 1. Create the thrift file: the thrift file is very simple. A WorkerManager provides a single ping method that the client invokes remotely via RPC, simulating the ping of the ICMP protocol, to check whether the server is up. # worker.thr...
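A sketch of what the C++ client for the WorkerManager service might look like, following the usual thrift 0.9.x generated-code boilerplate; the header name follows the generator's convention, and the host, port, and ping signature are assumptions:

    #include <thrift/protocol/TBinaryProtocol.h>
    #include <thrift/transport/TSocket.h>
    #include <thrift/transport/TTransportUtils.h>

    #include "WorkerManager.h"  // generated by: thrift --gen cpp worker.thrift

    using namespace apache::thrift;
    using namespace apache::thrift::protocol;
    using namespace apache::thrift::transport;

    int main() {
        // Assumed endpoint; adjust to wherever the server runs.
        boost::shared_ptr<TTransport> socket(new TSocket("localhost", 9090));
        boost::shared_ptr<TTransport> transport(new TBufferedTransport(socket));
        boost::shared_ptr<TProtocol> protocol(new TBinaryProtocol(transport));
        WorkerManagerClient client(protocol);

        transport->open();
        client.ping();  // the single RPC method the excerpt describes
        transport->close();
        return 0;
    }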
read(706) comment(0)

Compiling and installing Thrift 0.9.3 on Ubuntu

Table of Contents: 1. Download the thrift source code 2. Compile and install 3. Run the test program 4. Install. 1. Download the thrift source code: git clone https://git-wip-us.apache.org/repos/asf/thrift.git; git checkout 0.9.3. 2. Compile and install: apt-get install automake... apt-get...
read(744) comment(0)

The Tao and technique of data analysis

Table of Contents: 1. The Tao and technique of data analysis 1.1. The Tao 1.2. The technique 2. The general chapter of the "data intelligence criteria": formulating an intelligence strategy 2.1. The intelligence strategy 2.1.1. The dilemma always exists 2.1.2. Large companies and small companies 2.1.3. Regardless of company size, building business intelligence starts from strategy 2.1.4. Only strategic goals can produce the required operational data 2.1.5. An intelligence strategy template. 1. The Tao and technique of data analysis 1.1. The Tao: the overall logical thinking for realizing the value of data...
read(933) comment(1)

Spark-shell client settings

Setting up a client for the spark cluster. The spark cluster is a standalone cluster running under the root account. Copy the spark 1.5.2 installation from the master of the spark cluster into a local directory; the configuration files come along with the copy, and the main configuration is in fact the zookeeper settings. The owner is root; group and other users can read and execute: drwxr-xr-x 14 root root 4.0K Nov 16 11:48 spar...
read(728) comment(0)

Clojure: referencing a Java class implemented by another Clojure project

It sounds a bit convoluted, and indeed it is. The goal is to use a Java class file inside core.clj. The Java class (MoveDailyAction) is compiled from the clojure code of the same project (clojure -> Java class; see the previous blog post on implementing a Java class in clojure). The class file that clojure generates now needs to be put into a specified directory, and core.clj then references that class file. This is mainly done through leini...
read(926) comment(0)

Implementing a Java class in Clojure

Why would you need to do this? Clojure can call Java classes, but sometimes those Java classes require you to implement a subclass or to pass a custom Java object as a parameter, so you need to compile clojure code into a Java class. Remember the earlier mention of gen-class: inside (ns ...) you should use (gen-class). Below is an example: (ns kafka2hdfs.MoveDailyActi...
read(746) comment(0)

Spark SQL: creating DataFrames

Table of Contents: 1. Spark SQL 2. SQLContext 2.1. SQLContext is the entry point for all Spark SQL functionality 2.2. Creating a SQLContext from a SparkContext 2.3. HiveContext has more functionality than SQLContext; in the future SQLContext will gain that functionality too 3. DataFrames 3.1. Functionality 3.2. Creating DataFrames 3...
read(1043) comment(0)

Spark: loading a JSON file from HDFS into a SQL table via an RDD

RDD definition: RDD stands for Resilient Distributed Dataset and is the core abstraction of spark; through it you can read many kinds of files, and here we demonstrate reading an HDFS file. All spark work happens on RDDs, such as creating a new RDD, transforming an existing RDD, or computing a result from an existing RDD. In spark an RDD is an immutable collection of objects, and an RDD can be split into multiple partitions stored on different nodes. There are two ways to create an RDD,...
read(981) comment(0)

Removing the HDFS WARN util.NativeCodeLoader warning

This warning comes up often: # hdfs dfs -ls /input 15/11/10 10:00:32 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable. Where is the problem? Some people say this is because the precompiled hadoop package is...
read(860) comment(0)

Standalone Spark cluster installation

This article does not mash up spark with yarn; the goal is just to create a pure spark environment, since too many layers of things mixed together do not fly. Running account: create a spark service account with # useradd smile; smile is the account that runs the spark service. Download the installation package: under the root account, download the latest installation package; note that it is not the source package but the bin installation package, with hadoop 2.6 support: wget http://mirrors.cnnic.cn/apache/spark/s...
read(1442) comment(0)

HDFS user rights management

The super user: the user that starts the namenode service is the super user. For the specific configuration, see my blog posts on creating and setting up the HDFS super user with Hadoop 2.x on Ubuntu and on installing HDFS 2.7.1 on CentOS 6.6. Design idea: much of the design follows the POSIX file system; see the previous blog post on Linux user management commands. Files and...
read(826) comment(0)

Leiningen and Maven

Leiningen manages clojure projects; in effect it uses clojure to manage clojure. project.clj is the main configuration file, and the basic principle is to generate a pom.xml from the project.clj file and then draw on Maven's power for project management. To test whether project.clj is valid, use the command lein pom, which generates a pom.xml from the project.clj in the current directory; then, by checking the pom.xml...
read(884) comment(0)

Submitting Storm tasks remotely

Remote job submission is very necessary. If there is a storm cluster available for debugging (a bit of a luxury, but necessary), you can submit a topology directly from the local development machine. In the production environment we usually have Jenkins compile the code and then submit it to the storm cluster, which is also a form of remote submission. This way the development team never deals with the online Storm environment directly, which keeps the cluster safe. The basic principle is that the locally run storm jar communicates with the nimbus of the remote storm cluster, and the task (to...
read(966) comment(0)

Storm development series 3: a Clojure program that reads Kafka data and writes it to HDFS

The most commonly used data source for storm is naturally Kafka. Storm is usually used for real-time statistics, but reading data from Kafka and writing it to HDFS along the way is convenient too; in my experience this is almost a must-have feature. So this program reads data from Kafka and then writes it to HDFS. The biggest difference is that this is the clojure version, not the Java version. The following explains the project configuration; the project.clj file contains the dependencies: (defproject kafka...
read(1002) comment(1)