[root@master ~]# hadoop --help
Usage: hadoop [--config confdir] COMMAND
       where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
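The `fs` subcommand listed above is the one used most often day to day. A few typical invocations, as a sketch (the paths are illustrative, and these assume a running HDFS cluster):

```shell
# List the root of the default filesystem
hadoop fs -ls /

# Create a directory and upload a local file into it
hadoop fs -mkdir /liguodong/input
hadoop fs -put ./data.txt /liguodong/input/

# Print a file's contents, then remove it
hadoop fs -cat /liguodong/input/data.txt
hadoop fs -rm /liguodong/input/data.txt
```

Run `hadoop fs -help` to see the full list of filesystem subcommands and their options.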
Check the version
[root@master ~]# hadoop version
Hadoop 2.2.0.2.0.6.0-101
Subversion git@github.com:hortonworks/hadoop.git -r b07b2906c36defd389c8b5bd22bebc1bead8115b
Compiled by jenkins on 2014-01-09T05:18Z
Compiled with protoc 2.5.0
From source with checksum 704f1e463ebc4fb89353011407e965
This command was run using /usr/lib/hadoop/hadoop-common-2.2.0.2.0.6.0-101.jar
Run a jar file
[root@master liguodong]# hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples-2.2.0.2.0.6.0-101.jar pi 10 100
Number of Maps  = 10
Samples per Map = 100
Wrote input for Map #0
Wrote input for Map #1
Wrote input for Map #2
...
Job Finished in 19.715 seconds
Estimated value of Pi is 3.14800000000000000000
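The `pi` example estimates π by sampling points in the unit square: each of the 10 map tasks checks how many of its 100 samples fall inside the quarter circle, and the reducer combines the counts. The same quadrant-sampling idea can be sketched in plain awk (a rough illustration only; the real example job uses a Halton quasi-random sequence rather than `rand()`, which is why its estimate converges faster):

```shell
# Quadrant sampling: fraction of random points with x^2 + y^2 <= 1
# approximates pi/4, so multiplying by 4 estimates pi.
estimate=$(awk 'BEGIN {
    srand(42); n = 100000; inside = 0
    for (i = 0; i < n; i++) {
        x = rand(); y = rand()
        if (x*x + y*y <= 1) inside++
    }
    printf "%.2f", 4 * inside / n
}')
echo "Estimated value of Pi is $estimate"
```

With 10 x 100 = 1000 samples, as in the transcript above, the estimate (3.148) is only accurate to about two digits; more maps or more samples per map tighten it.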
[root@master /]# hdfs --help
Usage: hdfs [--config confdir] COMMAND
       where COMMAND is one of:
  dfs                  run a filesystem command on the file systems supported in Hadoop.
  namenode -format     format the DFS filesystem
  secondarynamenode    run the DFS secondary namenode
  namenode             run the DFS namenode
  journalnode          run the DFS journalnode
  zkfc                 run the ZK Failover Controller daemon
  datanode             run a DFS datanode
  dfsadmin             run a DFS admin client
  haadmin              run a DFS HA admin client
  fsck                 run a DFS filesystem checking utility
  balancer             run a cluster balancing utility
  jmxget               get JMX exported values from NameNode or DataNode.
  oiv                  apply the offline fsimage viewer to an fsimage
  oev                  apply the offline edits viewer to an edits file
  fetchdt              fetch a delegation token from the NameNode
  getconf              get config values from configuration
  groups               get the groups which users belong to
  snapshotDiff         diff two snapshots of a directory or diff the current directory contents with a snapshot
  lsSnapshottableDir   list all snapshottable dirs owned by the current user
                       Use -help to see options
  portmap              run a portmap service
  nfs3                 run an NFS version 3 gateway
Check whether a directory is healthy
[root@master liguodong]# hdfs fsck /liguodong
Connecting to namenode via http://master:50070
FSCK started by root (auth:SIMPLE) from /172.23.253.20 for path /liguodong at Wed Jun 03 10:43:41 CST 2015
...........
Status: HEALTHY
 Total size:    1559 B
 Total dirs:    7
 Total files:   11
 Total symlinks:    0
 Total blocks (validated):    7 (avg. block size 222 B)
...
The filesystem under path '/liguodong' is HEALTHY
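When fsck reports problems, it helps to rerun it with more detail. A couple of common variations (again assuming a running cluster; the path is illustrative):

```shell
# Show per-file block details and which datanodes hold each replica
hdfs fsck /liguodong -files -blocks -locations

# Report only the files that have missing or corrupt blocks
hdfs fsck / -list-corruptfileblocks
```

Note that fsck only inspects metadata held by the NameNode; it does not repair anything unless explicitly asked to (for example with `-move` or `-delete` on corrupt files).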
[root@master liguodong]# yarn --help
Usage: yarn [--config confdir] COMMAND
where COMMAND is one of:
  resourcemanager      run the ResourceManager
  nodemanager          run a nodemanager on each slave
  rmadmin              admin tools
  version              print the version
  jar <jar>            run a jar file
  application          prints application(s) report/kill application
  node                 prints node report(s)
  logs                 dump container logs
  classpath            prints the class path needed to get the Hadoop jar and the required libraries
  daemonlog            get/set the log level for each daemon
 or
  CLASSNAME            run the class named CLASSNAME

Most commands print help when invoked w/o parameters.
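Of the subcommands above, `application` and `logs` are the ones most useful for tracking jobs like the pi example. A brief sketch (the application id below is a placeholder; use the one printed when your job is submitted or shown by `-list`):

```shell
# List applications currently known to the ResourceManager
yarn application -list

# Kill a running application by its id (placeholder id shown)
yarn application -kill application_1433300000000_0001

# Dump the aggregated container logs of a finished application
yarn logs -applicationId application_1433300000000_0001
```

`yarn logs` only works after the application finishes and requires log aggregation to be enabled (`yarn.log-aggregation-enable=true`); otherwise the container logs stay on the individual NodeManager hosts.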