
Asynchronous Large File Upload in ASP.NET MVC

Added: 2013-12-6
Uploading large files over HTTP in ASP.NET is a long-standing challenge and one of the most frequently discussed problems on active ASP.NET forums. Besides handling the large files themselves, users often also need to see upload progress, and as soon as you need direct control over the data stream coming from the browser you hit walls everywhere. 51CTO.com has covered this topic before in articles such as "Removing the File Upload Size Limit in ASP.NET" and "A Summary of Large File Upload Development in ASP.NET".

  Most people assume that the options for uploading large files in ASP.NET come down to the following:

  ◆ Don't do it. You are better off embedding a Silverlight or Flash uploader in the page.

  ◆ Don't do it. HTTP simply wasn't designed for uploading large files; rethink the feature you actually need.

  ◆ Don't do it. ASP.NET itself can handle files of at most 2GB.

  ◆ Buy a commercial product such as SlickUpload, which uses an HttpModule to chunk the file stream.

  ◆ Use an open-source product such as NeatUpload, which also uses an HttpModule to chunk the file stream.

  Recently I was handed a task: build an upload tool with the following requirements:

  ◆ It must work over HTTP

  ◆ It must allow very large uploads (larger than 2GB)

  ◆ It must allow interrupted uploads to be resumed

  ◆ It must allow parallel uploads

  The first three "solutions" therefore didn't meet my requirements, and the remaining ones were too heavyweight for my purposes, so I set out to solve the problem in ASP.NET MVC myself. If you have any background in this area, you know that most of the trouble ultimately comes down to controlling ASP.NET's input stream and the chained request-processing pipeline. The behaviour usually described online is this: as soon as your code touches the HttpRequest's InputStream property, ASP.NET buffers the entire uploaded file before handing you the stream. That means that when I upload a file to a cloud service, I have to wait for the whole file to reach my server before I can forward it to its real destination, which takes twice as long.
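
  To make that concrete, here is a minimal sketch (not from the original article; the action name and target file are illustrative) of a "pass-through" action that looks like streaming but is not: by the time the action body runs and Request.InputStream is read, ASP.NET has already buffered the complete request.

  [AcceptVerbs(HttpVerbs.Post)]
  public ActionResult NaiveRelay()
  {
      // Touching Request.InputStream (or Request.Files) makes ASP.NET read and buffer
      // the whole request body first, so this loop only starts once the entire upload
      // has already arrived on the server.
      var target = Path.Combine(Server.MapPath("~/Uploads"), "relayed.bin");
      var buffer = new byte[8192];
      using (var destination = System.IO.File.Create(target))
      {
          int read;
          while ((read = Request.InputStream.Read(buffer, 0, buffer.Length)) > 0)
          {
              destination.Write(buffer, 0, read);
          }
      }
      return RedirectToAction("Index");
  }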

  First, I recommend reading Scott Hanselman's post on file uploads in ASP.NET MVC at http://www.hanselman.com/blog/CommentView.aspx?guid=bc137b6b-d8d0-47d1-9795-f8814f7d1903 to get a general picture of how uploads work. Hanselman's approach, however, will not handle truly large files. Following his method, you only need to adjust web.config so that ASP.NET accepts uploads up to its 2GB maximum. Don't worry, this setting won't eat your memory: anything larger than 256KB is buffered to disk.

  <system.web>
    <httpRuntime requestLengthDiskThreshold="256" maxRequestLength="2097151" />
  </system.web>

  This is a simple fix that suits most applications, but it was not an option for my task: even though the data is buffered to disk, this buffer-then-save-as approach still uses a large amount of memory.
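
  (A side note that is not in the original article: if the site runs on IIS 7 or later in integrated mode, the request-filtering limit usually has to be raised as well, since it defaults to roughly 30MB; unlike maxRequestLength, maxAllowedContentLength is specified in bytes.)

  <system.webServer>
    <security>
      <requestFiltering>
        <requestLimits maxAllowedContentLength="2147483647" />
      </requestFiltering>
    </security>
  </system.webServer>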

  Figure 1: Buffering the whole file and then saving it causes a sudden spike in memory consumption

  So how do you upload a large file in ASP.NET MVC by reading the stream directly, without triggering any of the buffering machinery? The answer is to stay as far away from ASP.NET as you can. Let's start with the UploadController. It has three action methods: one renders the index page for our uploads, one implements the buffering logic discussed above, and one works against the live request stream.

  public class UploadController : Controller
  {
      [AcceptVerbs(HttpVerbs.Get)]
      [Authorize]
      public ActionResult Index()
      {
          return View();
      }

      [AcceptVerbs(HttpVerbs.Post)]
      public ActionResult BufferToDisk()
      {
          var path = Server.MapPath("~/Uploads");
          foreach (string file in Request.Files)
          {
              var fileBase = Request.Files[file];
              try
              {
                  if (fileBase.ContentLength > 0)
                  {
                      fileBase.SaveAs(Path.Combine(path, fileBase.FileName));
                  }
              }
              catch (IOException)
              {
              }
          }
          return RedirectToAction("Index", "Upload");
      }

      //[AcceptVerbs(HttpVerbs.Post)]
      //[Authorize]
      public void LiveStream()
      {
          var path = Server.MapPath("~/Uploads");
          var context = ControllerContext.HttpContext;
          var provider = (IServiceProvider)context;
          var workerRequest = (HttpWorkerRequest)provider.GetService(typeof(HttpWorkerRequest));

          //[AcceptVerbs(HttpVerbs.Post)]
          var verb = workerRequest.GetHttpVerbName();
          if (!verb.Equals("POST"))
          {
              Response.StatusCode = (int)HttpStatusCode.NotFound;
              Response.SuppressContent = true;
              return;
          }

          //[Authorize]
          if (!context.User.Identity.IsAuthenticated)
          {
              Response.StatusCode = (int)HttpStatusCode.Unauthorized;
              Response.SuppressContent = true;
              return;
          }

          var encoding = context.Request.ContentEncoding;
          var processor = new UploadProcessor(workerRequest);
          processor.StreamToDisk(context, encoding, path);

          //return RedirectToAction("Index", "Upload");
          Response.Redirect(Url.Action("Index", "Upload"));
      }
  }

  Although one or two classes are obviously still missing here, the basic approach should be clear. It doesn't look all that different from the buffering logic; we are still streaming to disk, but the handling is different. First, there are no attributes on the action method: the verb and authorization restrictions have been removed and replaced with manual equivalents. The reason for doing these checks by hand rather than declaring ActionFilterAttributes is that those attributes involve significant parts of the ASP.NET pipeline. In fact, in my code I deliberately intercept the raw HttpWorkerRequest myself, because it cannot do two things at once.

  The HttpWorkerRequest has VIP access to the incoming request. Normally it does its work on behalf of ASP.NET itself, but here we hijack the request and then fool the rest of the pipeline into believing it has already been fully handled. To do that we need the UploadProcessor class that was missing from the example above. Its job is to physically read each chunk of data arriving from the browser and write it to disk. Because the uploaded content is split into multiple parts, UploadProcessor has to find the content headers and stitch the pieces back together into the output, which also makes it possible to upload several files in a single request.
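
  For reference (this illustration is not part of the original article; the boundary value and file names are invented), the Content-Type request header carries the boundary, for example "multipart/form-data; boundary=XyZ123", and the request body for two files then looks roughly like this. The byte markers built by SetByteMarkers below correspond to these boundary lines, to the blank line that ends each part's headers, and to the trailing "--XyZ123--" that closes the body:

  --XyZ123
  Content-Disposition: form-data; name="file"; filename="report.pdf"
  Content-Type: application/pdf

  (binary data of the first file)
  --XyZ123
  Content-Disposition: form-data; name="file"; filename="photo.jpg"
  Content-Type: image/jpeg

  (binary data of the second file)
  --XyZ123--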

  internal class UploadProcessor
  {
      private byte[] _buffer;
      private byte[] _boundaryBytes;
      private byte[] _endHeaderBytes;
      private byte[] _endFileBytes;
      private byte[] _lineBreakBytes;

      private const string _lineBreak = "\r\n";

      private readonly Regex _filename =
          new Regex(@"Content-Disposition:\s*form-data\s*;\s*name\s*=\s*""file""\s*;\s*filename\s*=\s*""(.*)""",
                    RegexOptions.IgnoreCase | RegexOptions.Compiled);

      private readonly HttpWorkerRequest _workerRequest;

      public UploadProcessor(HttpWorkerRequest workerRequest)
      {
          _workerRequest = workerRequest;
      }

      // Reads the entity body from the worker request chunk by chunk and writes the
      // file contents straight to disk.
      public void StreamToDisk(IServiceProvider provider, Encoding encoding, string rootPath)
      {
          var buffer = new byte[8192];
          if (!_workerRequest.HasEntityBody())
          {
              return;
          }

          var total = _workerRequest.GetTotalEntityBodyLength();
          var preloaded = _workerRequest.GetPreloadedEntityBodyLength();
          var loaded = preloaded;

          SetByteMarkers(_workerRequest, encoding);

          var body = _workerRequest.GetPreloadedEntityBody();
          if (body == null) // IE normally does not preload
          {
              body = new byte[8192];
              preloaded = _workerRequest.ReadEntityBody(body, body.Length);
              loaded = preloaded;
          }

          var text = encoding.GetString(body);
          var fileName = _filename.Matches(text)[0].Groups[1].Value;
          fileName = Path.GetFileName(fileName); // IE captures full user path; chop it

          var path = Path.Combine(rootPath, fileName);
          var files = new List<string> { fileName };
          var stream = new FileStream(path, FileMode.Create);

          if (preloaded > 0)
          {
              stream = ProcessHeaders(body, stream, encoding, preloaded, files, rootPath);
          }

          // Used to force further processing (i.e. redirects) to avoid buffering the files again
          var workerRequest = new StaticWorkerRequest(_workerRequest, body);
          var field = HttpContext.Current.Request.GetType().GetField("_wr", BindingFlags.NonPublic | BindingFlags.Instance);
          field.SetValue(HttpContext.Current.Request, workerRequest);

          if (!_workerRequest.IsEntireEntityBodyIsPreloaded())
          {
              var received = preloaded;
              while (total - received >= loaded && _workerRequest.IsClientConnected())
              {
                  loaded = _workerRequest.ReadEntityBody(buffer, buffer.Length);
                  stream = ProcessHeaders(buffer, stream, encoding, loaded, files, rootPath);
                  received += loaded;
              }

              var remaining = total - received;
              buffer = new byte[remaining];
              loaded = _workerRequest.ReadEntityBody(buffer, remaining);
              stream = ProcessHeaders(buffer, stream, encoding, loaded, files, rootPath);
          }

          stream.Flush();
          stream.Close();
          stream.Dispose();
      }

      // Builds the byte sequences for the MIME boundary, the end-of-headers marker and
      // the end-of-body marker from the request's Content-Type header.
      private void SetByteMarkers(HttpWorkerRequest workerRequest, Encoding encoding)
      {
          var contentType = workerRequest.GetKnownRequestHeader(HttpWorkerRequest.HeaderContentType);
          var bufferIndex = contentType.IndexOf("boundary=") + "boundary=".Length;
          var boundary = String.Concat("--", contentType.Substring(bufferIndex));

          _boundaryBytes = encoding.GetBytes(string.Concat(boundary, _lineBreak));
          _endHeaderBytes = encoding.GetBytes(string.Concat(_lineBreak, _lineBreak));
          _endFileBytes = encoding.GetBytes(string.Concat(_lineBreak, boundary, "--", _lineBreak));
          _lineBreakBytes = encoding.GetBytes(string.Concat(_lineBreak, boundary, _lineBreak));
      }

      // Strips boundary and header bytes out of the current chunk, switches to a new file
      // stream when a new part starts, and writes the remaining payload to the current stream.
      private FileStream ProcessHeaders(byte[] buffer, FileStream stream, Encoding encoding, int count, ICollection<string> files, string rootPath)
      {
          buffer = AppendBuffer(buffer, count);

          var startIndex = IndexOf(buffer, _boundaryBytes, 0);
          if (startIndex != -1)
          {
              var endFileIndex = IndexOf(buffer, _endFileBytes, 0);
              if (endFileIndex != -1)
              {
                  var precedingBreakIndex = IndexOf(buffer, _lineBreakBytes, 0);
                  if (precedingBreakIndex > -1)
                  {
                      startIndex = precedingBreakIndex;
                  }
                  endFileIndex += _endFileBytes.Length;
                  var modified = SkipInput(buffer, startIndex, endFileIndex, ref count);
                  stream.Write(modified, 0, count);
              }
              else
              {
                  var endHeaderIndex = IndexOf(buffer, _endHeaderBytes, 0);
                  if (endHeaderIndex != -1)
                  {
                      endHeaderIndex += _endHeaderBytes.Length;
                      var text = encoding.GetString(buffer);
                      var match = _filename.Match(text);
                      var fileName = match != null ? match.Groups[1].Value : null;
                      fileName = Path.GetFileName(fileName); // IE captures full user path; chop it
                      if (!string.IsNullOrEmpty(fileName) && !files.Contains(fileName))
                      {
                          files.Add(fileName);
                          var filePath = Path.Combine(rootPath, fileName);
                          stream = ProcessNextFile(stream, buffer, count, startIndex, endHeaderIndex, filePath);
                      }
                      else
                      {
                          var modified = SkipInput(buffer, startIndex, endHeaderIndex, ref count);
                          stream.Write(modified, 0, count);
                      }
                  }
                  else
                  {
                      // Headers are split across chunks; hold the buffer over for the next pass.
                      _buffer = buffer;
                  }
              }
          }
          else
          {
              stream.Write(buffer, 0, count);
          }
          return stream;
      }

      // Writes the tail of the previous file, closes its stream and opens a new stream
      // for the next file found in the same request.
      private static FileStream ProcessNextFile(FileStream stream, byte[] buffer, int count, int startIndex, int endIndex, string filePath)
      {
          var fullCount = count;
          var endOfFile = SkipInput(buffer, startIndex, count, ref count);
          stream.Write(endOfFile, 0, count);
          stream.Flush();
          stream.Close();
          stream.Dispose();

          stream = new FileStream(filePath, FileMode.Create);
          var startOfFile = SkipInput(buffer, 0, endIndex, ref fullCount);
          stream.Write(startOfFile, 0, fullCount);
          return stream;
      }

      // Returns the index of the first occurrence of the byte pattern 'value' in 'array', or -1.
      private static int IndexOf(byte[] array, IList<byte> value, int startIndex)
      {
          var index = 0;
          var start = Array.IndexOf(array, value[0], startIndex);
          if (start == -1)
          {
              return -1;
          }
          while ((start + index) < array.Length)
          {
              if (array[start + index] == value[index])
              {
                  index++;
                  if (index == value.Count)
                  {
                      return start;
                  }
              }
              else
              {
                  start = Array.IndexOf(array, value[0], start + index);
                  if (start != -1)
                  {
                      index = 0;
                  }
                  else
                  {
                      return -1;
                  }
              }
          }
          return -1;
      }

      // Removes the bytes between startIndex and endIndex from the input and updates count.
      private static byte[] SkipInput(byte[] input, int startIndex, int endIndex, ref int count)
      {
          var range = endIndex - startIndex;
          var size = count - range;
          var modified = new byte[size];
          var modifiedCount = 0;
          for (var i = 0; i < input.Length; i++)
          {
              if (i >= startIndex && i < endIndex)
              {
                  continue;
              }
              if (modifiedCount >= size)
              {
                  break;
              }
              modified[modifiedCount] = input[i];
              modifiedCount++;
          }
          input = modified;
          count = modified.Length;
          return input;
      }

      // Prepends any bytes held over from the previous chunk so that markers split across
      // chunk boundaries can still be matched.
      private byte[] AppendBuffer(byte[] buffer, int count)
      {
          var input = new byte[_buffer == null ? buffer.Length : _buffer.Length + count];
          if (_buffer != null)
          {
              Buffer.BlockCopy(_buffer, 0, input, 0, _buffer.Length);
          }
          Buffer.BlockCopy(buffer, 0, input, _buffer == null ? 0 : _buffer.Length, count);
          _buffer = null;
          return input;
      }
  }

  In the middle of the processing code you will have noticed another class, StaticWorkerRequest. Its job is to fool ASP.NET: when the submit button is clicked, it makes ASP.NET believe no file was uploaded. This is necessary because, once the upload finishes and we want to redirect to the next page, ASP.NET would otherwise notice that there is still data in the HTTP entity body and try to buffer the whole upload, which would put us right back where we started. To avoid that, we spoof the HttpWorkerRequest and inject into the HttpContext a StaticWorkerRequest that exposes only the beginning of the request, the only data that is still useful.

  internal class StaticWorkerRequest : HttpWorkerRequest
  {
      readonly HttpWorkerRequest _request;
      private readonly byte[] _buffer;

      public StaticWorkerRequest(HttpWorkerRequest request, byte[] buffer)
      {
          _request = request;
          _buffer = buffer;
      }

      public override int ReadEntityBody(byte[] buffer, int size)
      {
          return 0;
      }

      public override int ReadEntityBody(byte[] buffer, int offset, int size)
      {
          return 0;
      }

      public override byte[] GetPreloadedEntityBody()
      {
          return _buffer;
      }

      public override int GetPreloadedEntityBody(byte[] buffer, int offset)
      {
          Buffer.BlockCopy(_buffer, 0, buffer, offset, _buffer.Length);
          return _buffer.Length;
      }

      public override int GetPreloadedEntityBodyLength()
      {
          return _buffer.Length;
      }

      public override int GetTotalEntityBodyLength()
      {
          return _buffer.Length;
      }

      public override string GetKnownRequestHeader(int index)
      {
          return index == HeaderContentLength
              ? "0"
              : _request.GetKnownRequestHeader(index);
      }

      // All other methods elided, they're just passthrough
  }

  With StaticWorkerRequest standing in as the fake request, you can now upload large files in ASP.NET MVC by reading the data stream directly. Using this code as a starting point, you can easily record progress data and display it by calling another controller action via Ajax, and by staging uploads in a temporary area you can support resuming interrupted transfers. You no longer have to wait for the ASP.NET process to buffer the whole file to disk, and saving the file no longer consumes anywhere near as much memory as the buffer-and-save-as approach.

  Figure 2: Memory consumption is much flatter
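
  As a closing illustration of the Ajax progress reporting mentioned above, here is a minimal sketch (not part of the original article: the ProgressController, UploadProgressStore, the uploadId parameter and the use of .NET 4's ConcurrentDictionary are all assumptions) of how the streaming loop could publish how many bytes it has received and how a polling request could read that figure back as JSON:

  public static class UploadProgressStore
  {
      // Maps an upload identifier (e.g. a GUID generated by the client) to bytes received so far.
      private static readonly System.Collections.Concurrent.ConcurrentDictionary<string, long> _progress =
          new System.Collections.Concurrent.ConcurrentDictionary<string, long>();

      public static void Report(string uploadId, long bytesReceived)
      {
          _progress[uploadId] = bytesReceived;
      }

      public static long Get(string uploadId)
      {
          long bytes;
          return _progress.TryGetValue(uploadId, out bytes) ? bytes : 0;
      }
  }

  public class ProgressController : Controller
  {
      // Polled from the browser via Ajax; returns the bytes written so far for the given upload.
      [AcceptVerbs(HttpVerbs.Get)]
      public ActionResult Check(string uploadId)
      {
          return Json(new { bytesReceived = UploadProgressStore.Get(uploadId) },
                      JsonRequestBehavior.AllowGet);
      }
  }

  Inside UploadProcessor.StreamToDisk, a call such as UploadProgressStore.Report(uploadId, received) after each chunk (with the uploadId taken, for example, from the query string) would keep that figure current.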
