Assume you have a file named foo.txt in your local directory. You issue the following three
commands:
hadoop fs -mkdir input
hadoop fs -put foo.txt input/foo.txt
hadoop fs -put foo.txt input
What happens when you issue the third command?
A.
The write succeeds, overwriting foo.txt in HDFS with no warning
B.
The file is uploaded and stored as a plain file named input
C.
You get a warning that foo.txt is being overwritten
D.
You get an error message telling you that foo.txt already exists, and asking you if you would like
to overwrite it.
E.
You get an error message telling you that foo.txt already exists. The file is not written to HDFS
F.
You get an error message telling you that input is not a directory
G.
The write silently fails
Correct answer is E.
With a plain -put you cannot overwrite an existing file in HDFS; the command fails and the file is not written.
You can force an overwrite with the -f flag:
hadoop fs -put -f foo.txt input
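The semantics above can be sketched locally in Python. This is only an analogy: a hypothetical `put` helper using plain `os`/`shutil` calls to stand in for HDFS, refusing to overwrite unless a force flag (mirroring `-f`) is set.

```python
import os
import shutil
import tempfile

def put(local_src, dst_dir, force=False):
    """Mimic `hadoop fs -put`: copy local_src into dst_dir,
    refusing to overwrite an existing target unless force=True
    (the analogue of the -f flag)."""
    target = os.path.join(dst_dir, os.path.basename(local_src))
    if os.path.exists(target) and not force:
        raise FileExistsError(f"put: {target} already exists")
    shutil.copyfile(local_src, target)

# Demo in a throwaway directory.
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "foo.txt")
with open(src, "w") as f:
    f.write("hello")
inp = os.path.join(tmp, "input")
os.mkdir(inp)

put(src, inp)              # first put succeeds
try:
    put(src, inp)          # second put fails: target already exists
except FileExistsError as e:
    print(e)
put(src, inp, force=True)  # force=True overwrites, like -put -f
```

As in HDFS, the third command resolves to input/foo.txt, finds it already present, and refuses to write unless forced.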
Only E is the answer.
Correct answer is E:
I’ve just tested:
root@slave01:~# hadoop fs -put foo.txt input
put: Target input/foo.txt already exists
E
E. tested
Answer is E. It will only tell you the file already exists
I have the same idea. E