You want a node to swap Hadoop daemon data from RAM to disk only when absolutely
necessary. What should you do?

A.
Delete the /dev/vmswap file on the node
B.
Delete the /etc/swap file on the node
C.
Set the ram.swap parameter to 0 in core-site.xml
D.
Set vm.swapfile file on the node
E.
Delete the /swapfile file on the node
Ans is D
Improving Performance
This section summarizes some recent code improvements and configuration best practices.
Setting the vm.swappiness Linux Kernel Parameter
vm.swappiness is a Linux kernel parameter that controls how aggressively memory pages are swapped to disk. It can be set to a value between 0 and 100; the higher the value, the more aggressively the kernel seeks out inactive memory pages and swaps them to disk.
You can see what value vm.swappiness is currently set to by looking at /proc/sys/vm; for example:
cat /proc/sys/vm/swappiness
On most systems, it is set to 60 by default. This is not suitable for Hadoop cluster nodes, because it can cause processes to get swapped out even when there is free memory available. This can affect stability and performance, and may cause problems such as lengthy garbage collection pauses for important system daemons. Cloudera recommends that you set this parameter to 0; for example:
# sysctl -w vm.swappiness=0
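Note that `sysctl -w` only changes the value until the next reboot; to persist it, the line also has to be added to /etc/sysctl.conf. A minimal sketch of that step, where a temporary file stands in for /etc/sysctl.conf (editing the real file requires root, and the value shown is only an example):

```shell
# Sketch: persist a vm.swappiness setting. A temp file stands in for
# /etc/sysctl.conf, which needs root to edit on a real node.
CONF=$(mktemp)
echo 'vm.swappiness = 10' >> "$CONF"   # on a real node: append to /etc/sysctl.conf
grep '^vm.swappiness' "$CONF"          # verify the line is present
rm -f "$CONF"

# Then on a real node:
#   sudo sysctl -w vm.swappiness=10    # apply immediately, no reboot
#   cat /proc/sys/vm/swappiness        # confirm the running value
```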
I can’t understand you.
That option does not appear among the answers!
I disagree with you. Cloudera recommends setting this value to 10.
https://www.cloudera.com/documentation/enterprise/5-3-x/topics/cdh_admin_performance.html#xd_583c10bfdbd326ba-7dae4aa6-147c30d0933–7fd5__section_xpq_sdf_jq
Search for this on the web page
“sysctl -w vm.swappiness=10”
Answer is D
I concur with Hitesh; the answers shown above are NOT the actual wording on the exam (I just took this last week, 10/8/2015). One of the answers was to set vm.swappiness to “0” and that, in my opinion, is the right answer. It does not show up above, but it is worded this way on the CCA-500 exam.
On the test, option D reads as follows, and it appears to be the correct answer:
Set vm.swappiness=0 in /etc/sysctl.conf
(I tried the test on 22-Nov-2015 and scored 87%. Same questions and options were asked but in different order. I chose the answers based on the discussion in this site)
set vm.swappiness to “0”
Cloudera said:
to reduce the swappiness of the system
– Set “vm.swappiness” to “0” or “5” in /etc/sysctl.conf
D
In the /etc/sysctl.conf file, we need to set vm.swappiness to “0”.
Below is the config snippet:
=============
# General Tunings
vm.swappiness = 0
vm.overcommit_memory = 1
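A file like the snippet above is normally reloaded with `sudo sysctl -p` (or applied automatically at boot). As a small hedged illustration of reading the value back out of such a file, using a temporary copy rather than the real /etc/sysctl.conf:

```shell
# Sketch: parse vm.swappiness out of a sysctl.conf-style file.
# A temp file is used here purely for illustration.
CONF=$(mktemp)
printf '%s\n' '# General Tunings' 'vm.swappiness = 0' 'vm.overcommit_memory = 1' > "$CONF"

# Split on '=', strip spaces, print the configured value (prints 0 here).
awk -F'=' '/^vm.swappiness/ {gsub(/ /,"",$2); print $2}' "$CONF"
rm -f "$CONF"

# On a real node: sudo sysctl -p   (reloads /etc/sysctl.conf without a reboot)
```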
Cloudera recommends that you set vm.swappiness to a value between 1 and 10, preferably 1, for minimum swapping.
sudo sysctl -w vm.swappiness=1
http://www.cloudera.com/documentation/enterprise/latest/topics/cdh_admin_performance.html#xd_583c10bfdbd326ba-7dae4aa6-147c30d0933–7fd5
In the latest exam, there is an option set vm.swappiness = 0 which is the correct answer
Set vm.swappiness=0
It’s an installation recommendation and is also strongly recommended by Cloudera. Just read the manual.
Answer is D
For those saying that setting it to 0 is wrong: that was the OLD recommendation.
On most systems, it is set to 60 by default. This is not suitable for Hadoop cluster nodes, because it can cause processes to get swapped out even when there is free memory available. This can affect stability and performance, and may cause problems such as lengthy garbage collection pauses for important system daemons. Cloudera recommends that you set this parameter to 10 or less; for example:
# sysctl -w vm.swappiness=10
Cloudera previously recommended a setting of 0, but on recent kernels (such as those included with RedHat 6.4 and higher, and Ubuntu 12.04 LTS and higher) a setting of 0 might lead to out-of-memory issues, per this reference:
REF: https://www.cloudera.com/documentation/enterprise/5-3-x/topics/cdh_admin_performance.html#xd_583c10bfdbd326ba-7dae4aa6-147c30d0933–7fd5__section_xpq_sdf_jq