Input validation and representation problems are caused by metacharacters, alternate encodings, and numeric representations. Security problems result from trusting input. The issues include: "Buffer Overflows," "Cross-Site Scripting" attacks, "SQL Injection," and many others.
A PreferenceActivity fails to restrict the fragment classes it can instantiate. A malicious application can invoke an exported PreferenceActivity and supply it with an :android:show_fragment Intent extra in order to make it load an arbitrary class. The malicious app can make the PreferenceActivity load an arbitrary Fragment of the vulnerable app, which is normally loaded inside a non-exported Activity, exposing it to the attacker. Example 1: The following override accepts every fragment class, disabling the check entirely:
@Override
protected boolean isValidFragment(String fragmentName)
{
return true;
}
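A safer override compares the requested fragment class name against an explicit allowlist before anything is instantiated. The sketch below is illustrative: the class name com.example.app.SettingsFragment is a hypothetical entry, and the check is pulled into a plain static method so the logic can be shown in isolation; in a real app it would live inside the activity's isValidFragment override.

```java
import java.util.Set;

public class FragmentAllowlist {
    // Hypothetical allowlist: only the fragments this activity intends to host.
    private static final Set<String> ALLOWED =
        Set.of("com.example.app.SettingsFragment");

    // In a real PreferenceActivity this logic would sit inside
    // protected boolean isValidFragment(String fragmentName).
    public static boolean isValidFragment(String fragmentName) {
        return fragmentName != null && ALLOWED.contains(fragmentName);
    }
}
```

With this check in place, an Intent naming any fragment outside the allowlist is rejected instead of loaded.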
Allowing user input to control the apex:iframe source URL may lead to malicious content being loaded within the Visualforce page. Example 1: The following Visualforce code copies a user-supplied URL into the iframe URL without validating it. Because the page is served from Salesforce.com, the victim will trust the page and provide all of the requested information. The iframesrc URL parameter is used directly as the apex:iframe target URL.
<apex:page>
<apex:iframe src="{!$CurrentPage.parameters.iframesrc}"></apex:iframe>
</apex:page>
If an attacker supplies a victim with a link that has the iframesrc parameter set to a malicious website, the frame will be rendered with the content of the malicious website:
<iframe src="http://evildomain.com/">
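One mitigation is to validate the user-supplied URL against an allowlist of trusted hosts before using it as a frame target. In a Visualforce page this check would run in an Apex controller; the Java sketch below shows the idea, with trusted.example.com as a hypothetical allowlisted host.

```java
import java.net.URI;
import java.util.Set;

public class FrameSourceValidator {
    // Hypothetical allowlist of hosts the page is willing to frame.
    private static final Set<String> TRUSTED_HOSTS =
        Set.of("trusted.example.com");

    public static boolean isAllowedFrameSource(String url) {
        try {
            URI uri = new URI(url);
            // Require HTTPS and a host on the allowlist.
            return "https".equals(uri.getScheme())
                && uri.getHost() != null
                && TRUSTED_HOSTS.contains(uri.getHost());
        } catch (Exception e) {
            return false; // Malformed URLs are rejected.
        }
    }
}
```

Only if the check passes would the parameter be echoed into the apex:iframe src attribute.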
A Metadata object is created from an untrusted source, which might allow an attacker to control critical protocol fields. The Metadata class is often used to house header data for an underlying protocol used by Google Remote Procedure Call (gRPC). When the underlying protocol is HTTP, control of the data in a Metadata object can make the system vulnerable to HTTP Header Manipulation. Other attack vectors are possible and are primarily based on the underlying protocol. Example 1: The following code shows user-controllable data entering a gRPC Metadata object:
...
String badData = getUserInput();
Metadata headers = new Metadata();
headers.put(Metadata.Key.of("sample", Metadata.ASCII_STRING_MARSHALLER), badData);
...
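Because HTTP header manipulation typically relies on CR or LF characters smuggled into a header value, one defensive step is to strip those characters before the value reaches Metadata.put. A minimal sketch, assuming stripping (rather than rejecting) is acceptable for the application:

```java
public class HeaderSanitizer {
    // Remove CR and LF so a value cannot inject additional header lines.
    public static String sanitizeHeaderValue(String value) {
        if (value == null) {
            return "";
        }
        return value.replaceAll("[\\r\\n]", "");
    }
}
```

The sanitized value would then be passed to headers.put(...) in place of badData.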
Commands issued to core Hadoop cluster components such as the NameNode, DataNode, or JobTracker can change the state of the cluster. Example 1: The following code shows a Job submission in a typical client application that takes inputs from the command line on a Hadoop cluster master machine:
public static void run(String args[]) throws IOException {
String path = "/path/to/a/file";
DFSClient client = new DFSClient(args[1], new Configuration());
ClientProtocol nNode = client.getNameNode();
/* This sets the ownership of the file pointed to by path to a user and group
* identified by command-line arguments.
*/
nNode.setOwner(path, args[2], args[3]);
...
}
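Before forwarding command-line values to a privileged call such as setOwner, the client can validate them against a conservative pattern for user and group names. A sketch, where the POSIX-style name convention in the regex is an assumption to be adjusted per site policy:

```java
import java.util.regex.Pattern;

public class OwnerArgValidator {
    // Conservative pattern: a lowercase letter or underscore, followed by
    // letters, digits, underscores, or hyphens (an assumed site convention).
    private static final Pattern NAME =
        Pattern.compile("^[a-z_][a-z0-9_-]{0,31}$");

    public static boolean isValidName(String name) {
        return name != null && NAME.matcher(name).matches();
    }
}
```

The client would check isValidName(args[2]) and isValidName(args[3]) and abort before the setOwner call if either fails.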
A Job submitted to a Hadoop cluster can be tampered with in a hostile environment. An attacker who controls the JobConf controls the client job. Example 1: The following code shows a Job submission in a typical client application that takes inputs from the command line on a Hadoop cluster master machine:
public static JobConf run(String args[]) throws IOException {
String inputDir = args[0];
String outputDir = args[1];
// Untrusted command line argument
int numOfReducers = Integer.parseInt(args[3]);
Class mapper = getClassByName(args[4]);
Class reducer = getClassByName(args[5]);
Configuration defaults = new Configuration();
JobConf job = new JobConf(defaults, OptimizedDataJoinJob.class);
job.setNumMapTasks(1);
// An attacker may set random values that exceed the range of acceptable numbers of reducers
job.setNumReduceTasks(numOfReducers);
return job;
}
Example 2: The following code shows a case where an attacker controls which running job is killed through a command-line argument:
public static void main(String[] args) throws Exception {
JobID id = JobID.forName(args[0]);
JobConf conf = new JobConf(WordCount.class);
// configure this JobConf instance
...
JobClient.runJob(conf);
JobClient client = new JobClient(conf);
RunningJob job = client.getJob(id);
job.killJob();
}
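For Example 1, untrusted numeric parameters such as the reducer count can be range-checked before they are applied to the JobConf. A sketch, where the bounds are assumed site-specific values:

```java
public class ReducerCountValidator {
    // Assumed site-specific bounds on acceptable reducer counts.
    private static final int MIN_REDUCERS = 1;
    private static final int MAX_REDUCERS = 1000;

    // Parses and bounds-checks an untrusted count; throws on bad input
    // instead of letting an arbitrary value reach JobConf.setNumReduceTasks.
    public static int checkedReducerCount(String raw) {
        int n = Integer.parseInt(raw.trim());
        if (n < MIN_REDUCERS || n > MAX_REDUCERS) {
            throw new IllegalArgumentException("reducer count out of range: " + n);
        }
        return n;
    }
}
```

The submission code would then call job.setNumReduceTasks(ReducerCountValidator.checkedReducerCount(args[3])).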
Example 1: The following code compiles a template with HTML escaping disabled through the noEscape option:
let template = Handlebars.compile('{{foo}}', { noEscape: true })
Allowing templates to access prototype methods can enable Prototype Pollution attacks. Example 2: The following code renders a template while granting it access to the __defineGetter__ function:
let template2 = Handlebars.compile('{{foo}}')
console.log(template2({ foo: argument }, {
allowProtoMethodsByDefault: true,
allowedProtoMethods: {
__defineGetter__: true
}
}))