
HDFS deployed successfully, but the shell command output looks wrong #34

Open
lvzhaoxing opened this issue Oct 16, 2014 · 20 comments

@lvzhaoxing

Running [root@master client]# ./deploy shell hdfs dptst-ir dfs -ls /
gives the following result, which looks quite wrong:

[root@master client]# ./deploy shell hdfs dptst-ir dfs -ls /
14/10/16 15:20:14 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 22 items
-rw-r--r--   1 root root          0 2014-10-15 16:07 /.autofsck
-rw-r--r--   1 root root          0 2014-10-13 11:50 /.autorelabel
dr-xr-xr-x   - root root       4096 2014-10-14 04:14 /bin
dr-xr-xr-x   - root root       4096 2014-10-13 15:21 /boot
drwxr-xr-x   - root root       3380 2014-10-15 16:07 /dev
drwxr-xr-x   - root root       4096 2014-10-15 16:07 /etc
drwxr-xr-x   - root root       4096 2014-10-15 17:19 /home
dr-xr-xr-x   - root root       4096 2014-06-10 10:14 /lib
dr-xr-xr-x   - root root      12288 2014-10-14 04:14 /lib64
drwx------   - root root      16384 2014-06-10 10:09 /lost+found
drwxr-xr-x   - root root       4096 2011-09-23 19:50 /media
drwxr-xr-x   - root root       4096 2011-09-23 19:50 /mnt
drwxr-xr-x   - root root       4096 2014-06-10 10:14 /opt
dr-xr-xr-x   - root root          0 2014-10-16 00:07 /proc
dr-xr-x---   - root root       4096 2014-10-16 15:18 /root
dr-xr-xr-x   - root root      12288 2014-10-14 04:14 /sbin
drwxr-xr-x   - root root       4096 2014-06-10 10:10 /selinux
drwxr-xr-x   - root root       4096 2011-09-23 19:50 /srv
drwxr-xr-x   - root root          0 2014-10-16 00:07 /sys
drwxrwxrwt   - root root       4096 2014-10-16 15:20 /tmp
drwxr-xr-x   - root root       4096 2014-06-10 10:10 /usr
drwxr-xr-x   - root root       4096 2014-06-10 10:14 /var
@wuzesheng
Contributor

That is the layout of your local filesystem. To make the shell feature work, you need to apply a patch to your Hadoop Common: https://issues.apache.org/jira/browse/HADOOP-9223
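For reference, applying a JIRA patch to a matching Hadoop source tree usually looks like the sketch below. The patch file name is a placeholder; download the actual attachment from the HADOOP-9223 issue page, then rebuild the hadoop-common module:

```sh
# From the root of a hadoop-common source tree matching your deployed version.
# HADOOP-9223.patch is a placeholder name for the attachment on the JIRA issue.
patch -p1 < HADOOP-9223.patch

# Rebuild without running the test suite, then redeploy the resulting jar.
mvn package -DskipTests
```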

@lvzhaoxing
Author

By the way, where is that start.sh? I couldn't find it.

@wuzesheng
Contributor

It is under each service's run_dir, e.g. for zookeeper: $HOME/app/zookeeper/dptst/zookeeper/

@lvzhaoxing
Author

I see that patch targets 2.0.0-alpha, but I installed hadoop-2.5.0-cdh5.2.0.tar.gz. Can it still be applied? And if I don't apply the patch, will it affect the HBase deployment?

@wuzesheng
Contributor

The patch mainly touches the UserGroupInformation.java class, which has hardly changed in later versions, so it should apply cleanly. The change mainly adds support for passing configuration options via command-line arguments; it does not affect the deployment of other services.

@wuzesheng
Contributor

Without the patch, the shell command won't work properly; there is no other impact. The patch exists mainly to implement the shell command.

@lvzhaoxing
Author

One more question: how do I uninstall after bootstrap?

@wuzesheng
Contributor

First stop, then cleanup.
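Following the ./deploy command pattern used earlier in this thread, the uninstall sequence would presumably look like the sketch below (dptst-ir is just the cluster name from the example above; substitute your own):

```sh
# Stop all hdfs processes of the cluster, then remove the deployed files.
./deploy stop hdfs dptst-ir
./deploy cleanup hdfs dptst-ir
```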

@lvzhaoxing
Author

If I don't apply the HADOOP-9223 patch, how do I get files onto HDFS?

@wuzesheng
Contributor

Build a package with the pack command, then run the following from inside the built package: bin/hdfs dfs -put xx xx
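Assuming pack is invoked like the other client subcommands in this thread, the workflow would be roughly the sketch below (the package directory name and the file paths are placeholders):

```sh
# Build a client-side package; per a later comment it lands under minos/client/packages.
./deploy pack hdfs dptst-ir

# From inside the generated package directory, use the bundled hdfs CLI.
cd packages/<generated-package-dir>
bin/hdfs dfs -put localfile.txt /user/work/
```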

@lvzhaoxing
Author

bin/hdfs dfs -ls / still lists the local directories. Does that mean HDFS is not actually running correctly?

@lvzhaoxing
Author

I tried it: every hdfs shell operation acts on the local filesystem, which is very strange.

@wuzesheng
Contributor

Please paste the contents of core-site.xml and hdfs-site.xml under etc/hadoop in your pack output.

@lvzhaoxing
Author

Under packages/hdfs/dptst-example/current/etc/hadoop? The core-site.xml and hdfs-site.xml there are both empty (just a bare configuration element).
Only those under directories like app/hdfs/dptst-example/datanode (and journalnode) have content.

@wuzesheng
Contributor

No. Use the Minos client's pack command; it builds a package under minos/client/packages.

@lvzhaoxing
Author

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://dptst-example</value>
  </property>

  <property>
    <name>ha.zookeeper.quorum</name>
    <value>master:12000,slave1:12000,slave2:12000</value>
  </property>

  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>hdfs</value>
  </property>

  <property>
    <name>hadoop.proxyuser.hue.groups</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.proxyuser.hue.hosts</name>
    <value>*</value>
  </property>

  <property>
    <name>hadoop.security.authentication</name>
    <value>simple</value>
  </property>

  <property>
    <name>hadoop.security.authorization</name>
    <value>false</value>
  </property>

  <property>
    <name>hadoop.security.use-weak-http-crypto</name>
    <value>false</value>
  </property>

  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop</value>
  </property>

  <property>
    <name>hue.kerberos.principal.shortname</name>
    <value>hue</value>
  </property>

  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>

</configuration>

@lvzhaoxing
Author

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

  <property>
    <name>dfs.block.access.token.enable</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.block.local-path-access.user</name>
    <value>work, hbase, hbase_srv, impala</value>
  </property>

  <property>
    <name>dfs.block.size</name>
    <value>128m</value>
  </property>

  <property>
    <name>dfs.client.failover.proxy.provider.dptst-example</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>

  <property>
    <name>dfs.client.read.shortcircuit</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.client.read.shortcircuit.skip.auth</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.cluster.administrators</name>
    <value>hdfs_admin</value>
  </property>

  <property>
    <name>dfs.datanode.address</name>
    <value>0.0.0.0:12402</value>
  </property>

  <property>
    <name>dfs.datanode.balance.bandwidthPerSec</name>
    <value>10485760</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/mnt/data200/hdfs/data</value>
  </property>

  <property>
    <name>dfs.datanode.data.dir.perm</name>
    <value>700</value>
  </property>

  <property>
    <name>dfs.datanode.failed.volumes.tolerated</name>
    <value>0</value>
  </property>

  <property>
    <name>dfs.datanode.http.address</name>
    <value>0.0.0.0:12401</value>
  </property>

  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>0.0.0.0:12400</value>
  </property>

  <property>
    <name>dfs.datanode.kerberos.principal</name>
    <value>hdfs_tst/hadoop@EXAMPLE.HADOOP</value>
  </property>

  <property>
    <name>dfs.datanode.keytab.file</name>
    <value>/etc/hadoop/conf/hdfs_tst.keytab</value>
  </property>

  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>

  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>

  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence&#xA;shell(/bin/true)</value>
  </property>

  <property>
    <name>dfs.ha.fencing.ssh.connect-timeout</name>
    <value>2000</value>
  </property>

  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/work/.ssh/id_rsa</value>
  </property>

  <property>
    <name>dfs.ha.namenodes.dptst-example</name>
    <value>host0,host1</value>
  </property>

  <property>
    <name>dfs.ha.zkfc.port</name>
    <value>12300</value>
  </property>

  <property>
    <name>dfs.hosts.exclude</name>
    <value>/home/bigdd/app/hdfs/dptst-example/namenode/excludes</value>
  </property>

  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/home/bigdd/data/hdfs/dptst-example/journalnode</value>
  </property>

  <property>
    <name>dfs.journalnode.http-address</name>
    <value>0.0.0.0:12101</value>
  </property>

  <property>
    <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
    <value>HTTP/hadoop@EXAMPLE.HADOOP</value>
  </property>

  <property>
    <name>dfs.journalnode.kerberos.principal</name>
    <value>hdfs_tst/hadoop@EXAMPLE.HADOOP</value>
  </property>

  <property>
    <name>dfs.journalnode.keytab.file</name>
    <value>/etc/hadoop/conf/hdfs_tst.keytab</value>
  </property>

  <property>
    <name>dfs.journalnode.rpc-address</name>
    <value>0.0.0.0:12100</value>
  </property>

  <property>
    <name>dfs.namenode.handler.count</name>
    <value>64</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.dptst-example.host0</name>
    <value>master:12201</value>
  </property>

  <property>
    <name>dfs.namenode.http-address.dptst-example.host1</name>
    <value>slave2:12201</value>
  </property>

  <property>
    <name>dfs.namenode.kerberos.internal.spnego.principal</name>
    <value>HTTP/hadoop@EXAMPLE.HADOOP</value>
  </property>

  <property>
    <name>dfs.namenode.kerberos.principal</name>
    <value>hdfs_tst/hadoop@EXAMPLE.HADOOP</value>
  </property>

  <property>
    <name>dfs.namenode.keytab.file</name>
    <value>/etc/hadoop/conf/hdfs_tst.keytab</value>
  </property>

  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/mnt/data200/hdfs/name</value>
  </property>

  <property>
    <name>dfs.namenode.replication.min</name>
    <value>1</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.dptst-example.host0</name>
    <value>master:12200</value>
  </property>

  <property>
    <name>dfs.namenode.rpc-address.dptst-example.host1</name>
    <value>slave2:12200</value>
  </property>

  <property>
    <name>dfs.namenode.safemode.threshold-pct</name>
    <value>0.99f</value>
  </property>

  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://master:12100;slave1:12100/dptst-example</value>
  </property>

  <property>
    <name>dfs.namenode.upgrade.permission</name>
    <value>0777</value>
  </property>

  <property>
    <name>dfs.nameservices</name>
    <value>dptst-example</value>
  </property>

  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>

  <property>
    <name>dfs.permissions.superuser</name>
    <value>hdfs_admin</value>
  </property>

  <property>
    <name>dfs.permissions.superusergroup</name>
    <value>supergroup</value>
  </property>

  <property>
    <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
    <value>HTTP/hadoop@EXAMPLE.HADOOP</value>
  </property>

  <property>
    <name>dfs.secondary.namenode.kerberos.principal</name>
    <value>hdfs_tst/hadoop@EXAMPLE.HADOOP</value>
  </property>

  <property>
    <name>dfs.secondary.namenode.keytab.file</name>
    <value>/etc/hadoop/conf/hdfs_tst.keytab</value>
  </property>

  <property>
    <name>dfs.web.authentication.kerberos.keytab</name>
    <value>/etc/hadoop/conf/hdfs_tst.keytab</value>
  </property>

  <property>
    <name>dfs.web.authentication.kerberos.principal</name>
    <value>HTTP/hadoop@EXAMPLE.HADOOP</value>
  </property>

  <property>
    <name>dfs.web.ugi</name>
    <value>hdfs,supergroup</value>
  </property>

  <property>
    <name>fs.permissions.umask-mode</name>
    <value>022</value>
  </property>

  <property>
    <name>fs.trash.checkpoint.interval</name>
    <value>1440</value>
  </property>

  <property>
    <name>fs.trash.interval</name>
    <value>10080</value>
  </property>

  <property>
    <name>hadoop.security.group.mapping.file.name</name>
    <value>/home/bigdd/app/hdfs/dptst-example/namenode/hadoop-groups.conf</value>
  </property>

  <property>
    <name>ignore.secure.ports.for.testing</name>
    <value>true</value>
  </property>

  <property>
    <name>net.topology.node.switch.mapping.impl</name>
    <value>org.apache.hadoop.net.TableMapping</value>
  </property>

  <property>
    <name>net.topology.table.file.name</name>
    <value>/home/bigdd/app/hdfs/dptst-example/namenode/rackinfo.txt</value>
  </property>

</configuration>

@wuzesheng
Contributor

Are the names master, slave1, and slave2 correctly configured in your hosts files?

@lvzhaoxing
Author

Yes, rest assured: the hosts entries are all configured, and ping works.
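For completeness, every cluster node (and the machine running the Minos client) would need hosts entries along these lines; the IP addresses below are placeholders:

```
# /etc/hosts (example addresses only)
192.168.1.10  master
192.168.1.11  slave1
192.168.1.12  slave2
```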

@wuzesheng
Contributor

The config looks fine to me. From the machine where the Minos client runs, can you telnet master 12100?
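The same reachability check can be scripted instead of using telnet interactively; this is a sketch run from the Minos client machine, with the host names and JournalNode port 12100 taken from dfs.namenode.shared.edits.dir (qjournal://master:12100;slave1:12100/...) in the config above. It assumes nc (netcat) is installed; the /dev/tcp variant needs only bash:

```sh
# Probe the JournalNode RPC port on each quorum host.
for h in master slave1; do
  if nc -z -w 2 "$h" 12100; then
    echo "$h:12100 reachable"
  else
    echo "$h:12100 UNREACHABLE"
  fi
done

# Alternative without netcat, using bash's built-in /dev/tcp:
timeout 2 bash -c 'exec 3<>/dev/tcp/master/12100' && echo "master:12100 reachable"
```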
