I have recently been building a web network-disk (cloud storage) system. The most basic feature of a network disk is file upload, but uploading a large file the traditional way in the browser is a disaster, so large files can be uploaded in chunks. The main idea is: 1. split the large file into chunks; 2. upload each chunk; 3. merge the chunk files back together.
The idea is clear and simple, but a few questions remain: 1. How is a large file split into chunks? 2. How are the chunks recorded and stored? 3. How do we verify each chunk's uniqueness and ordering? 4. How are the chunks merged?
Splitting the large file is mainly handled on the front end; for that I recommend Baidu's WebUploader.
For storing the chunks, I use temporary files: the temporary files record the state of the byte range that each chunk corresponds to.
To tell chunk files apart, MD5 hashes can be used (look MD5 up first if you are not familiar with it). Put simply, an MD5 hash is like a file's ID card: each distinct file has its own unique MD5 value.
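To make the "file ID card" idea concrete, here is a minimal sketch of computing a file's MD5 with the JDK's standard `MessageDigest` API. The class and file names are illustrative, not from the project:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5Demo {
    /** Compute the hex MD5 digest of a file by streaming it in 8 KB blocks. */
    public static String md5Of(Path file) throws IOException, NoSuchAlgorithmException {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                md.update(buf, 0, n);
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("md5demo", ".bin");
        Files.write(tmp, "hello".getBytes());
        System.out.println(md5Of(tmp)); // prints 5d41402abc4b2a76b9719d911017c592
        Files.delete(tmp);
    }
}
```

Streaming the file through the digest keeps memory use constant, which matters for exactly the large files this article is about.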
For merging: after the front end has split the file, the requests sent to the server carry the chunk index and chunk size. The server computes each chunk's start position from the index and the per-chunk size, reads that chunk's data, and writes it into the target file at that offset. The merged file ends up stored under two paths: one under the current network-disk directory, and one real permanent path (the latter is what makes the "instant upload" feature possible).
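The offset arithmetic described above (start position = chunk index × chunk size) can be sketched with `RandomAccessFile`. This is an illustration of the idea only, not the project's actual merge code, which appears later in the article:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.file.Files;
import java.nio.file.Path;

public class ChunkMergeDemo {
    /** Write one chunk's bytes into the target file at offset chunk * chunkSize. */
    static void writeChunk(Path target, int chunk, long chunkSize, byte[] data) throws IOException {
        try (RandomAccessFile raf = new RandomAccessFile(target.toFile(), "rw")) {
            raf.seek(chunk * chunkSize); // start position derived from index and chunk size
            raf.write(data);
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("merge", ".bin");
        long chunkSize = 4;
        // Chunks may arrive in any order; the computed offsets keep the result consistent.
        writeChunk(tmp, 1, chunkSize, "WORL".getBytes());
        writeChunk(tmp, 0, chunkSize, "HELL".getBytes());
        writeChunk(tmp, 2, chunkSize, "D!".getBytes()); // the last chunk may be short
        System.out.println(new String(Files.readAllBytes(tmp))); // prints HELLWORLD!
        Files.delete(tmp);
    }
}
```

Because each chunk lands at a deterministic offset, out-of-order and retried uploads are harmless: rewriting a chunk just overwrites the same byte range.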
I won't paste the front-end chunking code here; it mainly relies on Baidu's WebUploader.
Below is the main server-side code.
File upload
/**
 * Upload one file chunk
 *
 * @param file             the chunk data
 * @param wholeMd5         MD5 of the whole file
 * @param name             file name
 * @param type             file type
 * @param lastModifiedDate upload time
 * @param size             file size
 * @param start            start offset of this chunk within the file
 * @param end              end offset of this chunk within the file
 * @param chunks           total number of chunks
 * @param chunk            index of the chunk being uploaded, 0-based
 */
@ApiOperation(value = "File upload", hidden = true)
@IgnoreUserToken
@ApiResponses({
        @ApiResponse(code = 500, response = RestError.class, message = "error")
})
@PostMapping(value = "upload")
public ResponseEntity<Integer> fileUpload(@ApiParam(name = "file") @RequestPart MultipartFile file,
                                          @ApiParam(name = "md5") @RequestParam String wholeMd5,
                                          @ApiParam(name = "name") @RequestParam String name,
                                          @ApiParam(name = "type") @RequestParam String type,
                                          @ApiParam(name = "date") @RequestParam Date lastModifiedDate,
                                          @ApiParam(name = "size") @RequestParam long size,
                                          @ApiParam(name = "start offset") @RequestParam long start,
                                          @ApiParam(name = "end offset") @RequestParam long end,
                                          @ApiParam(name = "total chunks") @RequestParam(name = "chunks", defaultValue = "1") int chunks,
                                          @ApiParam(name = "chunk index, 0-based") @RequestParam(name = "chunk", defaultValue = "0") int chunk) {
    try {
        log.info("file upload started");
        this.fileServiceImpl.fileUpload(file.getInputStream(), wholeMd5, name, type, lastModifiedDate, size, chunks, chunk, start, end);
        return ResponseEntity.ok(1);
    } catch (Exception e) {
        return new ResponseEntity(RestError.IO_ERROR.setReason(e.getMessage()).toString(), HttpStatus.INTERNAL_SERVER_ERROR);
    }
}
@Override
public boolean fileUpload(InputStream fileIS,
                          String wholeMd5,
                          String name, String type,
                          Date lastModifiedDate, long size,
                          int chunks,
                          int chunk,
                          long start,
                          long end) throws Exception {
    boolean result = false;
    try {
        File tempDirFile = new File(fileDir, TEMP_DIR);
        if (!tempDirFile.exists()) {
            tempDirFile.mkdirs();
        }
        // directory holding this file's chunks, keyed by the whole-file MD5
        File wholeMd5FileDirectory = new File(tempDirFile.getAbsolutePath(), wholeMd5);
        if (!wholeMd5FileDirectory.exists()) {
            wholeMd5FileDirectory.mkdirs();
        }
        // chunk file, named "<chunk index><separator><total chunks><ext>"
        File chunkFile = new File(wholeMd5FileDirectory.getAbsolutePath(), chunk + FILE_SEPARATOR + chunks + FILE_EXT);
        long chunkSize = end - start;
        if (!chunkFile.exists() || chunkFile.length() != chunkSize) {
            // write a new chunk file
            long startTime = System.currentTimeMillis();
            log.info("creating chunk {} - {}", start, end);
            int length;
            try (FileOutputStream out = new FileOutputStream(chunkFile)) {
                // StreamUtils.copy does not close the streams itself
                length = StreamUtils.copy(fileIS, out);
            }
            long endTime = System.currentTimeMillis();
            log.info("chunk upload took {} ms", (endTime - startTime));
            if (length == chunkSize) {
                result = true;
            }
        }
    } catch (Exception e) {
        log.error("file upload failed", e);
        throw e;
    }
    return result;
}
Checking the file's MD5
/**
 * Check whether a file with this MD5 already exists
 *
 * @param md5            file MD5
 * @param fileSize       file size
 * @param md5CheckLength number of bytes used to compute the MD5
 * @return 1 if the file exists, 0 otherwise
 */
@ApiOperation(value = "Check the file's MD5")
@GetMapping(value = "checkFileMd5/{md5}/{fileSize}/{md5CheckLength}")
@ApiResponses({
        @ApiResponse(code = 500, response = RestError.class, message = "error")
})
public ResponseEntity<Integer> checkFileMd5(@ApiParam("file MD5") @PathVariable String md5,
                                            @ApiParam("file size") @PathVariable long fileSize,
                                            @ApiParam("number of bytes used to compute the MD5") @PathVariable long md5CheckLength) {
    try {
        log.info("checking whether md5 [{}] exists", md5);
        return ResponseEntity.ok(this.fileServiceImpl.checkFileMd5(md5, fileSize, md5CheckLength) ? 1 : 0);
    } catch (Exception e) {
        return new ResponseEntity(RestError.DATABASE_ERROR.setReason(e.getMessage()).toString(), HttpStatus.INTERNAL_SERVER_ERROR);
    }
}
@Override
public boolean checkFileMd5(String md5, long fileSize, long md5CheckLength) {
    Optional<UploadFileInfo> uploadFileInfo = this.uploadFileDao.findByMd5AndSize(md5, fileSize);
    boolean isExist = false;
    if (uploadFileInfo.isPresent()) {
        File wholeFile = new File(this.fileDir, uploadFileInfo.get().getDfsPath());
        if (wholeFile.exists() && wholeFile.length() == fileSize && md5.equals(FileUtils.md5(wholeFile, 0, md5CheckLength))) {
            isExist = true;
        }
    }
    log.info("file with md5 {} {}", md5, isExist ? "exists" : "does not exist");
    return isExist;
}
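`FileUtils.md5(file, start, length)` above is a project helper whose source is not shown. A plausible implementation, hashing only `length` bytes from offset `start` so that huge files can be fingerprinted cheaply, might look like this (all names here are assumptions):

```java
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.security.MessageDigest;

public class PartialMd5Demo {
    /**
     * Digest only `length` bytes starting at `start` — a cheap fingerprint for
     * large files, mirroring what FileUtils.md5(file, start, length) is
     * assumed to do in the article's service code.
     */
    static String md5(Path file, long start, long length) throws Exception {
        MessageDigest md = MessageDigest.getInstance("MD5");
        try (InputStream in = Files.newInputStream(file)) {
            in.skip(start); // sketch only: a robust version would loop until `start` bytes are skipped
            byte[] buf = new byte[8192];
            long remaining = length;
            int n;
            while (remaining > 0
                    && (n = in.read(buf, 0, (int) Math.min(buf.length, remaining))) != -1) {
                md.update(buf, 0, n);
                remaining -= n;
            }
        }
        StringBuilder hex = new StringBuilder();
        for (byte b : md.digest()) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("partial", ".bin");
        Files.write(tmp, "hello world".getBytes());
        // Digesting only the first 5 bytes yields the MD5 of "hello".
        System.out.println(md5(tmp, 0, 5)); // prints 5d41402abc4b2a76b9719d911017c592
        Files.delete(tmp);
    }
}
```

The trade-off is deliberate: hashing a prefix is much faster than hashing the whole file, and combined with the size check in `checkFileMd5` it is usually a good enough identity test.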
Checking whether a chunk exists
/**
 * Check whether a chunk already exists on the server
 *
 * @param md5            whole-file MD5
 * @param blockMd5       chunk file MD5
 * @param md5CheckLength number of bytes used to compute the chunk MD5
 * @param chunk          chunk index, 0-based
 * @param chunks         total number of chunks
 * @param chunkStart     start offset of the chunk within the file
 * @param chunkEnd       end offset of the chunk within the file
 * @return 1 if the chunk exists and its MD5 matches, 0 otherwise
 */
@ApiOperation(value = "Check whether a chunk exists")
@ApiResponses({
        @ApiResponse(code = 500, response = RestError.class, message = "error")
})
@GetMapping(value = "checkChunk/{md5}/{blockMd5}/{md5CheckLength}/{chunk}/{chunks}/{chunkStart}/{chunkEnd}")
public ResponseEntity<Integer> checkChunk(@ApiParam("whole-file MD5") @PathVariable String md5,
                                          @ApiParam("chunk file MD5") @PathVariable String blockMd5,
                                          @ApiParam("number of bytes used to compute the chunk MD5") @PathVariable long md5CheckLength,
                                          @ApiParam("chunk index, 0-based") @PathVariable int chunk,
                                          @ApiParam("total number of chunks") @PathVariable int chunks,
                                          @ApiParam("start offset of the chunk within the file") @PathVariable long chunkStart,
                                          @ApiParam("end offset of the chunk within the file") @PathVariable long chunkEnd) {
    try {
        log.info("checking whether chunk [{}]/[{}] with md5 [{}] exists", chunk, chunks, blockMd5);
        return ResponseEntity.ok(this.fileServiceImpl.checkChunk(md5, blockMd5, md5CheckLength, chunk, chunks, chunkStart, chunkEnd) ? 1 : 0);
    } catch (Exception e) {
        return new ResponseEntity(RestError.DATABASE_ERROR.setReason(e.getMessage()).toString(), HttpStatus.INTERNAL_SERVER_ERROR);
    }
}
@Override
public boolean checkChunk(String md5, String blockMd5, long md5CheckLength, int chunk, int chunks, long chunkStart, long chunkEnd) {
    boolean isExist = false;
    File chunkFile = new File(fileDir, TEMP_DIR + File.separator + md5 + File.separator + chunk + FILE_SEPARATOR + chunks + FILE_EXT);
    if (chunkFile.exists() && chunkFile.length() == (chunkEnd - chunkStart)) {
        String calBlockMd5 = FileUtils.md5(chunkFile, 0, md5CheckLength);
        if (blockMd5.equals(calBlockMd5)) {
            isExist = true;
        }
    }
    log.info("chunk {}/{} of {} {}", chunk, chunks, md5, isExist ? "exists" : "does not exist");
    return isExist;
}
Merging the files
/**
 * Merge the uploaded chunks into one file
 *
 * @param fileInfo metadata of the file to merge
 * @return 1 on success, 0 otherwise
 */
@ApiOperation(value = "Merge files", notes = "Merge the chunk-uploaded data into a single file")
@ApiResponses({
        @ApiResponse(code = 500, response = RestError.class, message = "error")
})
@PostMapping(value = "mergeChunks")
public ResponseEntity<Integer> mergeChunks(@Validated @RequestBody FileInfo fileInfo, BindingResult bindingResult) {
    log.info("starting file merge");
    if (bindingResult.hasErrors()) {
        log.error("invalid request parameters");
        return new ResponseEntity("invalid request parameters", HttpStatus.BAD_REQUEST);
    } else {
        try {
            DataEntity dataEntity = this.fileServiceImpl.mergeChunks(fileInfo);
            log.info("merge finished, saved dataEntityId: {}", dataEntity != null ? dataEntity.getId() : null);
            return ResponseEntity.ok(dataEntity != null ? 1 : 0);
        } catch (FileMargeException e) {
            log.error(e.getMessage(), e);
            return new ResponseEntity(RestError.FILE_MARGE_ERROR.setReason(e.getMessage()).toString(), HttpStatus.INTERNAL_SERVER_ERROR);
        } catch (FileNotAllException e) {
            log.error(e.getMessage(), e);
            return new ResponseEntity(RestError.FILE_NOTALL_ERROR.setReason(e.getMessage()).toString(), HttpStatus.INTERNAL_SERVER_ERROR);
        } catch (IOException e) {
            log.error(e.getMessage(), e);
            return new ResponseEntity(RestError.IO_ERROR.setReason(e.getMessage()).toString(), HttpStatus.INTERNAL_SERVER_ERROR);
        }
    }
}
/**
 * Merge the uploaded chunks into one file
 *
 * @param fileInfo metadata of the file to merge
 * @return {DataEntity} the saved directory entry
 * @throws FileNotAllException if the chunks do not add up to the whole file
 * @throws IOException on I/O failure
 */
@Override
public DataEntity mergeChunks(FileInfo fileInfo) throws IOException, FileNotAllException, FileMargeException {
    // First check whether the database already has a record for this file
    Optional<UploadFileInfo> uploadFileInfoOptional = this.uploadFileDao.findByMd5AndSize(fileInfo.getMd5(), fileInfo.getSize());
    log.info("checking whether the file info exists in the database");
    UploadFileInfo uploadFileInfo = null;
    if (uploadFileInfoOptional.isPresent()) {
        log.info("file info: {}", fileInfo);
        uploadFileInfo = uploadFileInfoOptional.get();
    }
    if (uploadFileInfo == null) {
        uploadFileInfo = new UploadFileInfo();
    }
    // Then check whether the merged file itself exists
    log.info("checking the real file");
    File wholeFile = new File(getRealFileRoot(), fileInfo.getMd5() + FILE_SEPARATOR + fileInfo.getName());
    if (!wholeFile.exists() || wholeFile.length() != fileInfo.getSize()) {
        log.info("file missing or its length does not match!");
        if (wholeFile.exists()) {
            log.info("length is {} != {}", wholeFile.length(), fileInfo.getSize());
        }
        File tempDirFile = new File(fileDir, TEMP_DIR + File.separator + fileInfo.getMd5());
        try {
            if (tempDirFile.exists()) {
                log.info("chunk directory exists");
                // list every chunk file in the directory
                File[] partFiles = tempDirFile.listFiles((f, name) -> name.endsWith(FILE_EXT));
                log.info("number of chunk files: {}", partFiles.length);
                if (partFiles.length > 0) {
                    // sort by name length first, then lexicographically, so that
                    // "2_*" sorts before "10_*" even though "10" < "2" as a string
                    Arrays.sort(partFiles, (File f1, File f2) -> {
                        String name1 = f1.getName();
                        String name2 = f2.getName();
                        if (name1.length() != name2.length()) {
                            return name1.length() - name2.length();
                        }
                        return name1.compareTo(name2);
                    });
                    long size = 0;
                    try (FileChannel resultFileChannel = new FileOutputStream(wholeFile, true).getChannel()) {
                        for (int i = 0; i < partFiles.length; i++) {
                            size += partFiles[i].length();
                            // skip chunks already appended by an earlier, interrupted merge
                            if (size > wholeFile.length()) {
                                log.info("merging chunk {}: {}", i, partFiles[i].getName());
                                try (FileChannel inChannel = new FileInputStream(partFiles[i]).getChannel()) {
                                    resultFileChannel.transferFrom(inChannel, resultFileChannel.size(), inChannel.size());
                                }
                            }
                        }
                    }
                    if (size < fileInfo.getSize()) {
                        // the chunks on disk do not add up to the full file
                        log.info("chunk files are incomplete");
                        throw new FileNotAllException();
                    }
                }
                log.info("deleting chunk data");
                this.threadPoolUtil.getExecutor().execute(() -> {
                    for (File child : tempDirFile.listFiles()) {
                        child.delete();
                    }
                    tempDirFile.delete();
                });
            }
        } catch (FileNotAllException e) {
            // do not mask an incomplete-chunk failure as a merge failure
            throw e;
        } catch (Exception e) {
            log.error("chunk merge failed", e);
            throw new FileMargeException();
        }
}
    if (uploadFileInfo.getId() == null) {
        log.info("saving the uploaded file info");
        uploadFileInfo.setCreateTime(fileInfo.getCreateTime());
        uploadFileInfo.setMd5(fileInfo.getMd5());
        uploadFileInfo.setType(fileInfo.getType());
        uploadFileInfo.setSize(wholeFile.length());
        uploadFileInfo.setDfsPath(wholeFile.getAbsolutePath().substring(this.fileDir.length() + 1));
        this.uploadFileDao.save(uploadFileInfo);
    }
    // The file size should be updated once merging completes
    log.info("fetching the parent directory info");
    DataEntity parent = this.getDataEntityById(fileInfo.getParentId());
    // If the file info carries a relative path, create the real upload directories
    String path = fileInfo.getPath();
    if (StringUtils.hasText(path)) {
        log.info("relative path present, creating the directory chain");
        path = FilenameUtils.getFullPathNoEndSeparator(path);
        String[] paths = path.split("/");
        for (String tempPath : paths) {
            if (StringUtils.hasText(tempPath)) {
                DataEntity dataEntity = this.dataEntityDao.findByNameAndParentAndUserId(tempPath, parent, UserUtil.getUserId());
                if (dataEntity == null) {
                    dataEntity = new DataEntity();
                    dataEntity.setName(tempPath);
                    dataEntity.setDir(true);
                    dataEntity.setParent(parent);
                    parent = this.dataEntityDao.save(dataEntity);
                } else {
                    parent = dataEntity;
                }
            }
        }
    }
    log.info("creating the directory entry");
    DataEntity dataEntity = new DataEntity();
    dataEntity.setName(fileInfo.getName());
    dataEntity.setExt(fileInfo.getExt());
    dataEntity.setDataType(fileInfo.getFileType());
    dataEntity.setFileInfo(uploadFileInfo);
    dataEntity.setParent(parent);
    dataEntity.setSize(uploadFileInfo.getSize());
    dataEntity = this.saveAndRenameFile(dataEntity);
    this.saveAndCreateFile(dataEntity);
    // Pick a parser based on the uploaded file's type
    String fileType = fileInfo.getFileType();
    if ("images".equals(fileType) || "vector".equals(fileType) || "terrain".equals(fileType) || "original".equals(fileType)) {
        String resultInfo = analysis(dataEntity, fileInfo);
        log.info("parse result: {}", resultInfo);
    }
    return dataEntity;
}
As for the instant-upload feature, the principle is simply MD5 verification: before a file is uploaded, compute the MD5 of its content (or of a partial range of it), then check whether a record with the same MD5 already exists. If it does, the file is served straight from the server's real permanent path instead of being chunk-uploaded again, which makes the upload appear instantaneous.
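Stripped of the HTTP and database plumbing, that decision reduces to a lookup by hash. The in-memory sketch below simulates the flow (in the real system the lookup is the `checkFileMd5` endpoint backed by the database; the names here are hypothetical):

```java
import java.util.HashSet;
import java.util.Set;

public class InstantUploadDemo {
    // Stand-in for the server-side MD5 index; the real check hits the database.
    static final Set<String> knownMd5 = new HashSet<>();

    /** Returns true if the upload can be skipped and the existing copy linked. */
    static boolean tryInstantUpload(String fileMd5) {
        if (knownMd5.contains(fileMd5)) {
            // server already has the bytes: just create a new directory entry
            return true;
        }
        // after a normal chunked upload completes, register the hash
        knownMd5.add(fileMd5);
        return false;
    }

    public static void main(String[] args) {
        System.out.println(tryInstantUpload("abc123")); // first upload: false
        System.out.println(tryInstantUpload("abc123")); // same content again: true
    }
}
```

The key design point is that the permanent file path is keyed by content hash, not by owner or directory, so any number of directory entries can share one physical copy.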