Cursor AI IDE Developer Tool

Official download: https://www.cursor.so/

Cursor is an AI-powered programming tool for developers. Under the hood it is backed by GPT-3.5 or GPT-4, it does not require a VPN, and it can be used directly from within China.

Key point: it is free, and no account or login is required.

Supported platforms: Windows, Linux, macOS

Supported languages: Java, PHP, HTML, JS, Python, Vue, Go, CSS, C, and more

How to Use Cursor

Usage is simple; there are just two shortcut operations:

Ctrl + K: writes the generated output directly into the file.

Ctrl + L: shows the generated output in the panel on the right (it is not written into the file), similar to an intelligent Q&A assistant that answers follow-up questions with context.

Ctrl+K

Ctrl + K: after you enter a prompt, the generated answer is written directly into the document.

With nothing (no code) selected, Cursor starts writing at the cursor position in the file. With content (code) selected, the edit is applied within the selected range.

Manually create a file main.html:

# Shortcut

Ctrl + K

Prompt: write a small game in HTML

# Since no game name or type was specified, Cursor generates a random small game for you.

Result:

Small Game

Welcome to the small game

Rules: click the button below and see how many points you can score!

Score: 0

Ctrl+L

Ctrl + L: after you enter a prompt, the generated answer is shown in the panel on the right, and you can keep asking follow-up questions with context. Unlike Ctrl + K, the answer is not written into the document.

With nothing (no code) selected, the answer is based on the whole file plus your question and is shown in the right-hand panel. With content (code) selected, the answer is based on the selection plus your question and is shown in the right-hand panel.

# Shortcut

Ctrl + L

Prompt: write a small game in HTML

# Since no game name or type was specified, Cursor generates a random small game for you in the panel.

Cursor Application Examples

Supports Java, PHP, HTML, JS, Python, Vue, Go, CSS, C, and more

Features tried so far:

- Generate a file upload Controller
- Add Swagger descriptions
- Add method implementations
- Generate comments and examples for methods
- Create a new interface and its implementation class
- Flesh out implementation class code
- Optimize code
- Fix code
- Diagnose what is wrong with a piece of code
- Generate a file management interface and its implementation classes (multi-step: local, MinIO, MongoDB, etc.)
- Generate front-end and back-end chunked upload code (multi-step)
- Generate HTML code
- Generate Vue code
- Generate Neo4j graph database operations
- ...

Chunked Upload Example

Using Java, first create a file main.java (the file name can be anything)

# Shortcut

Ctrl + K

Prompt: write a chunked file upload controller

Result screenshot:
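As a rough illustration only (the screenshot is not reproduced here, and this is not the exact generated output), the skeleton Cursor tends to produce for this prompt looks something like the sketch below. Spring Web is assumed, the class and parameter names are illustrative, and, as noted further down, the method bodies are still empty at this stage.

import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

// Illustrative skeleton: class and method signatures only, no real logic yet.
@RestController
public class ChunkedUploadControllerSketch {

    // Upload a single chunk of a file
    @PostMapping("/chunk/upload")
    public ResponseEntity<Void> uploadChunk(@RequestParam("file") MultipartFile file,
                                            @RequestParam("chunkNumber") Integer chunkNumber,
                                            @RequestParam("totalChunks") Integer totalChunks,
                                            @RequestParam("identifier") String identifier) {
        // TODO: to be filled in by a later Ctrl + K prompt
        return ResponseEntity.ok().build();
    }

    // Merge all uploaded chunks into the final file
    @PostMapping("/chunk/merge")
    public ResponseEntity<Void> mergeChunks(@RequestParam("identifier") String identifier,
                                            @RequestParam("filename") String filename) {
        // TODO: to be filled in by a later Ctrl + K prompt
        return ResponseEntity.ok().build();
    }
}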

# Shortcut

Ctrl + A to select all

# Shortcut

Ctrl + K

Prompt: add Swagger descriptions

That covers the basic usage of Ctrl + K. You may have noticed a problem: the generated methods have no actual implementation inside; only the class and its methods are defined. So how do we get Cursor to implement the method bodies for us? The following walks through it:

Steps:

Fill in the implementation of the method

Ctrl + K

Prompt: validate the file first, then store it on the server, and return the storage path

Demo screenshot:
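A minimal sketch of what a filled-in body for this prompt could look like, assuming a Spring MultipartFile endpoint and local-disk storage; the upload directory, messages, and class name are illustrative and not the exact code Cursor produced.

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.nio.file.StandardCopyOption;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;

@RestController
public class SimpleUploadControllerSketch {

    @PostMapping("/upload")
    public ResponseEntity<String> upload(@RequestParam("file") MultipartFile file) throws IOException {
        // 1. Validate the file
        if (file == null || file.isEmpty()) {
            return ResponseEntity.badRequest().body("File is empty");
        }
        // 2. Store it on the server
        Path dir = Paths.get("uploads");
        Files.createDirectories(dir);
        Path target = dir.resolve(file.getOriginalFilename());
        Files.copy(file.getInputStream(), target, StandardCopyOption.REPLACE_EXISTING);
        // 3. Return the storage path
        return ResponseEntity.ok(target.toAbsolutePath().toString());
    }
}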

The upload endpoint is done. The chunk-upload endpoint and the upload-complete (merge) endpoint are related to each other, so select both methods first, then press Ctrl + K and enter "fill in the implementation". Demonstrated as follows:

# Shortcut

Manually select the chunk-upload and upload-complete methods

# Shortcut

Ctrl + K

Prompt: fill in the implementation

Demo screenshot:

Generate a Front-End Request Example from the Back-End API

# Manually select the methods in the class

# Shortcut

Ctrl + L

Prompt: generate a complete HTML example

# Shortcut

Ctrl + L

Prompt: based on the selected methods, generate a complete HTML example

Continue

No screenshots for this one; it looks much like the above.

Generic File Management Interface with Multiple Implementations

Using Java, first create a file main.java (the file name can be anything)

# Create the interface and its methods

Ctrl + K

Prompt: write a generic file management interface supporting chunked upload, merge, upload, download, and listing all files

# Create implementation classes for the interface

Ctrl + K

Prompt: implementation class / MinIO implementation class / MongoDB implementation class

# Flesh out the implementation class code; select the implementation class first

Ctrl + K

Prompt: fill in the implementation

Demo screenshot:

Not shown here; see the code in the final results below.

Final Results

All of the code below was generated by Cursor, although the whole debugging process was still fairly long.

Result 1 - Example

Note: all of the code was generated automatically by the AI

import org.apache.commons.io.IOUtils;
import org.apache.commons.net.ftp.FTP;
import org.apache.commons.net.ftp.FTPClient;
import org.apache.commons.net.ftp.FTPFile;
import org.bson.Document;
import org.bson.types.ObjectId;
import com.mongodb.MongoClientSettings;
import com.mongodb.MongoCredential;
import com.mongodb.ServerAddress;
import com.mongodb.client.MongoClient;
import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.MongoCursor;
import com.mongodb.client.MongoDatabase;
import com.mongodb.client.gridfs.GridFSBucket;
import com.mongodb.client.gridfs.GridFSBuckets;
import com.mongodb.client.gridfs.GridFSDownloadStream;
import com.mongodb.client.gridfs.GridFSUploadStream;
import com.mongodb.client.gridfs.model.GridFSDownloadByNameOptions;
import com.mongodb.client.gridfs.model.GridFSFile;
import com.mongodb.client.gridfs.model.GridFSUploadOptions;
import io.minio.MinioClient;
import io.minio.Result;
import io.minio.errors.MinioException;
import io.minio.messages.Item;
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

/**
 * File manager interface.
 */
public interface FileManager {

    /**
     * Upload a file chunk.
     * @param inputStream the file input stream
     * @param fileName    the file name
     * @param contentType the content type
     * @param chunkIndex  the chunk index
     * @param totalChunks the total number of chunks
     * @throws Exception on error
     */
    void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception;

    /**
     * Merge the uploaded chunks into one file.
     * @param fileName    the file name
     * @param chunkSize   the chunk size
     * @param totalChunks the total number of chunks
     * @throws Exception on error
     */
    void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception;

    /**
     * Upload a file.
     * @param inputStream the file input stream
     * @param fileName    the file name
     * @param contentType the content type
     * @throws Exception on error
     */
    void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception;

    /**
     * Download a file.
     * @param fileName the file name
     * @return the file input stream
     * @throws Exception on error
     */
    InputStream downloadFile(String fileName) throws Exception;

    /**
     * Delete a file.
     * @param fileName the file name
     * @throws Exception on error
     */
    void deleteFile(String fileName) throws Exception;

    /**
     * List all files.
     * @return the list of files
     * @throws Exception on error
     */
    List<Document> listFiles() throws Exception;
}

public class FileManagerImpl implements FileManager {

private static final String FILE_DIRECTORY = "C:/fileStorage/";

@Override

public void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception {

File directory = new File(FILE_DIRECTORY);

if (!directory.exists()) {

directory.mkdirs();

}

try (FileOutputStream outputStream = new FileOutputStream(new File(directory, fileName + "_" + chunkIndex))) {

IOUtils.copy(inputStream, outputStream);

} catch (Exception e) {

throw new Exception("Failed to upload file chunk", e);

}

}

@Override

public void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception {

File directory = new File(FILE_DIRECTORY);

if (!directory.exists()) {

directory.mkdirs();

}

try (FileOutputStream outputStream = new FileOutputStream(new File(directory, fileName))) {

for (int i = 0; i < totalChunks; i++) {

try (FileInputStream inputStream = new FileInputStream(new File(directory, fileName + "_" + i))) {

IOUtils.copy(inputStream, outputStream);

}

}

} catch (Exception e) {

throw new Exception("Failed to merge file", e);

}

}

@Override

public void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception {

File directory = new File(FILE_DIRECTORY);

if (!directory.exists()) {

directory.mkdirs();

}

try (FileOutputStream outputStream = new FileOutputStream(new File(directory, fileName))) {

IOUtils.copy(inputStream, outputStream);

} catch (Exception e) {

throw new Exception("Failed to upload file", e);

}

}

@Override

public InputStream downloadFile(String fileName) throws Exception {

File file = new File(FILE_DIRECTORY + fileName);

if (!file.exists()) {

throw new Exception("File not found");

}

return new FileInputStream(file);

}

@Override

public void deleteFile(String fileName) throws Exception {

File file = new File(FILE_DIRECTORY + fileName);

if (!file.exists()) {

throw new Exception("File not found");

}

file.delete();

}

@Override

public List<Document> listFiles() {
    File directory = new File(FILE_DIRECTORY);
    File[] files = directory.listFiles();
    List<Document> documents = new ArrayList<>();
    if (files != null) {
        for (File file : files) {
            Document document = new Document();
            document.put("name", file.getName());
            document.put("size", file.length());
            document.put("storageType", "local");
            documents.add(document);
        }
    }
    return documents;
}

}

public class MinioFileManagerImpl implements FileManager {

private static final String BUCKET_NAME = "file-storage";

private static final String ENDPOINT = "http://localhost:9000";

private static final String ACCESS_KEY = "minioadmin";

private static final String SECRET_KEY = "minioadmin";

private final MinioClient minioClient;

public MinioFileManagerImpl() throws Exception {

minioClient = new MinioClient(ENDPOINT, ACCESS_KEY, SECRET_KEY);

if (!minioClient.bucketExists(BUCKET_NAME)) {

minioClient.makeBucket(BUCKET_NAME);

}

}

@Override

public void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception {

String objectName = fileName + "_" + chunkIndex;

minioClient.putObject(BUCKET_NAME, objectName, inputStream, contentType);

}

@Override

public void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception {

List<String> objectNames = new ArrayList<>();

for (int i = 0; i < totalChunks; i++) {

objectNames.add(fileName + "_" + i);

}

String mergedObjectName = fileName;

minioClient.composeObject(BUCKET_NAME, objectNames, mergedObjectName);

for (String objectName : objectNames) {

minioClient.removeObject(BUCKET_NAME, objectName);

}

}

@Override

public void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception {

minioClient.putObject(BUCKET_NAME, fileName, inputStream, contentType);

}

@Override

public InputStream downloadFile(String fileName) throws Exception {

return minioClient.getObject(BUCKET_NAME, fileName);

}

@Override

public void deleteFile(String fileName) throws Exception {

minioClient.removeObject(BUCKET_NAME, fileName);

}

@Override

public List<Document> listFiles() throws Exception {
    List<Document> documents = new ArrayList<>();
    Iterable<Result<Item>> results = minioClient.listObjects(BUCKET_NAME);
    for (Result<Item> result : results) {
        Item item = result.get();
        Document document = new Document();
        document.put("name", item.objectName());
        document.put("size", item.size());
        document.put("storageType", "minio");
        documents.add(document);
    }
    return documents;
}

}

public class MongoFileManagerImpl implements FileManager {

private static final String DATABASE_NAME = "fileStorage";

private static final String COLLECTION_NAME = "files";

private final MongoCollection<Document> collection;

// public MongoFileManagerImpl() {

// MongoClient mongoClient = MongoClients.create();

// MongoDatabase database = mongoClient.getDatabase(DATABASE_NAME);

// collection = database.getCollection(COLLECTION_NAME);

// }

public MongoFileManagerImpl() {

MongoClient mongoClient = MongoClients.create(

MongoClientSettings.builder()

.applyToClusterSettings(builder ->

builder.hosts(Arrays.asList(new ServerAddress("localhost", 27017))))

.credential(MongoCredential.createCredential("username", "fileStorage", "password".toCharArray()))

.build());

MongoDatabase database = mongoClient.getDatabase(DATABASE_NAME);

collection = database.getCollection(COLLECTION_NAME);

}

@Override

public void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception {

GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);

GridFSUploadOptions options = new GridFSUploadOptions()

.chunkSizeBytes(1024 * 1024)

.metadata(new Document("fileName", fileName));

try (GridFSUploadStream uploadStream = gridFSBucket.openUploadStream(fileName + "_" + chunkIndex, options)) {

IOUtils.copy(inputStream, uploadStream);

} catch (Exception e) {

throw new Exception("Failed to upload file chunk", e);

}

}

@Override

public void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception {

GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);

GridFSDownloadByNameOptions options = new GridFSDownloadByNameOptions().revision(0);

try (FileOutputStream outputStream = new FileOutputStream(new File(fileName))) {

for (int i = 0; i < totalChunks; i++) {

try (GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStreamByName(fileName + "_" + i, options)) {

IOUtils.copy(downloadStream, outputStream);

}

}

} catch (Exception e) {

throw new Exception("Failed to merge file", e);

}

}

@Override

public void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception {

GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);

GridFSUploadOptions options = new GridFSUploadOptions()

.chunkSizeBytes(1024 * 1024)

.metadata(new Document("fileName", fileName));

try (GridFSUploadStream uploadStream = gridFSBucket.openUploadStream(fileName, options)) {

IOUtils.copy(inputStream, uploadStream);

} catch (Exception e) {

throw new Exception("Failed to upload file", e);

}

}

@Override

public InputStream downloadFile(String fileName) throws Exception {

GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);

GridFSDownloadByNameOptions options = new GridFSDownloadByNameOptions().revision(0);

GridFSDownloadStream downloadStream = gridFSBucket.openDownloadStreamByName(fileName, options);

if (downloadStream == null) {

throw new Exception("File not found");

}

return downloadStream;

}

@Override

public void deleteFile(String fileName) throws Exception {

GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);

gridFSBucket.delete(new ObjectId(fileName));

}

@Override

public List<Document> listFiles() {
    List<Document> documents = new ArrayList<>();
    GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);
    try (MongoCursor<GridFSFile> cursor = gridFSBucket.find().iterator()) {
        while (cursor.hasNext()) {
            GridFSFile file = cursor.next();
            Document document = new Document();
            document.put("name", file.getFilename());
            document.put("size", file.getLength());
            document.put("storageType", "mongo");
            documents.add(document);
        }
    }
    return documents;
}

}

public class FtpFileManagerImpl implements FileManager {

private static final String SERVER_ADDRESS = "localhost";

private static final int SERVER_PORT = 21;

private static final String USERNAME = "username";

private static final String PASSWORD = "password";

private static final String REMOTE_DIRECTORY = "/fileStorage/";

private final FTPClient ftpClient;

public FtpFileManagerImpl() throws Exception {

ftpClient = new FTPClient();

ftpClient.connect(SERVER_ADDRESS, SERVER_PORT);

ftpClient.login(USERNAME, PASSWORD);

ftpClient.enterLocalPassiveMode();

ftpClient.setFileType(FTP.BINARY_FILE_TYPE);

ftpClient.changeWorkingDirectory(REMOTE_DIRECTORY);

}

@Override

public void uploadFileChunk(InputStream inputStream, String fileName, String contentType, int chunkIndex, int totalChunks) throws Exception {

String remoteFileName = fileName + "_" + chunkIndex;

ftpClient.storeFile(remoteFileName, inputStream);

}

@Override

public void mergeFile(String fileName, int chunkSize, int totalChunks) throws Exception {

try (FileOutputStream outputStream = new FileOutputStream(new File(fileName))) {

for (int i = 0; i < totalChunks; i++) {

String remoteFileName = fileName + "_" + i;

ftpClient.retrieveFile(remoteFileName, outputStream);

}

} catch (Exception e) {

throw new Exception("Failed to merge file", e);

}

}

@Override

public void uploadFile(InputStream inputStream, String fileName, String contentType) throws Exception {

ftpClient.storeFile(fileName, inputStream);

}

@Override

public InputStream downloadFile(String fileName) throws Exception {

ByteArrayOutputStream outputStream = new ByteArrayOutputStream();

boolean success = ftpClient.retrieveFile(fileName, outputStream);

if (!success) {

throw new Exception("File not found");

}

return new ByteArrayInputStream(outputStream.toByteArray());

}

@Override

public void deleteFile(String fileName) throws Exception {

boolean success = ftpClient.deleteFile(fileName);

if (!success) {

throw new Exception("File not found");

}

}

@Override

public List<Document> listFiles() throws Exception {
    FTPFile[] files = ftpClient.listFiles();
    List<Document> documents = new ArrayList<>();
    if (files != null) {
        for (FTPFile file : files) {
            Document document = new Document();
            document.put("name", file.getName());
            document.put("size", file.getSize());
            document.put("storageType", "ftp");
            documents.add(document);
        }
    }
    return documents;
}

}

import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.*;

import java.io.ByteArrayInputStream;

import java.io.InputStream;

import java.util.List;

public class FileManagerTest {

private final FileManager fileManager;

public FileManagerTest() throws Exception {

fileManager = new MongoFileManagerImpl();

}

@Test

public void testUploadFile() throws Exception {

String fileName = "test.txt";

String content = "This is a test file";

InputStream inputStream = new ByteArrayInputStream(content.getBytes());

fileManager.uploadFile(inputStream, fileName, "text/plain");

InputStream downloadedInputStream = fileManager.downloadFile(fileName);

byte[] downloadedBytes = downloadedInputStream.readAllBytes();

String downloadedContent = new String(downloadedBytes);

assertEquals(content, downloadedContent);

fileManager.deleteFile(fileName);

}

@Test

public void testUploadFileChunk() throws Exception {

String fileName = "test.txt";

String content = "This is a test file";

int chunkSize = 5;

int totalChunks = (int) Math.ceil((double) content.length() / chunkSize);

InputStream inputStream = new ByteArrayInputStream(content.getBytes());

for (int i = 0; i < totalChunks; i++) {

byte[] chunkBytes = new byte[chunkSize];

int bytesRead = inputStream.read(chunkBytes);

InputStream chunkInputStream = new ByteArrayInputStream(chunkBytes, 0, bytesRead);

fileManager.uploadFileChunk(chunkInputStream, fileName, "text/plain", i, totalChunks);

}

fileManager.mergeFile(fileName, chunkSize, totalChunks);

InputStream downloadedInputStream = fileManager.downloadFile(fileName);

byte[] downloadedBytes = downloadedInputStream.readAllBytes();

String downloadedContent = new String(downloadedBytes);

assertEquals(content, downloadedContent);

fileManager.deleteFile(fileName);

}

@Test

public void testDeleteFile() throws Exception {

String fileName = "test.txt";

String content = "This is a test file";

InputStream inputStream = new ByteArrayInputStream(content.getBytes());

fileManager.uploadFile(inputStream, fileName, "text/plain");

fileManager.deleteFile(fileName);

boolean fileExists = true;

try {

fileManager.downloadFile(fileName);

} catch (Exception e) {

fileExists = false;

}

assertFalse(fileExists);

}

@Override

public void downloadFileToDisk(String fileName, String localFilePath) throws Exception {

File file = new File(localFilePath);

try (FileOutputStream outputStream = new FileOutputStream(file)) {

IOUtils.copy(downloadFile(fileName), outputStream);

} catch (Exception e) {

throw new Exception("Failed to download file to disk", e);

}

}

@Override

public void continueUploadFile(InputStream inputStream, String fileName, String contentType, long position) throws Exception {

GridFSBucket gridFSBucket = GridFSBuckets.create(collection.getDatabase(), COLLECTION_NAME);

GridFSUploadOptions options = new GridFSUploadOptions()

.chunkSizeBytes(1024 * 1024)

.metadata(new Document("fileName", fileName));

try (GridFSUploadStream uploadStream = gridFSBucket.openUploadStream(fileName, options)) {

uploadStream.setPosition(position);

IOUtils.copy(inputStream, uploadStream);

} catch (Exception e) {

throw new Exception("Failed to continue upload file", e);

}

}

@Override

public void downloadFileToDisk(String fileName, String localFilePath, long position) throws Exception {

File file = new File(localFilePath);

try (FileOutputStream outputStream = new FileOutputStream(file, true)) {

InputStream inputStream = downloadFile(fileName);

inputStream.skip(position);

IOUtils.copy(inputStream, outputStream);

} catch (Exception e) {

throw new Exception("Failed to download file to disk", e);

}

}

@Test

public void testListFiles() throws Exception {

List documents = fileManager.listFiles();

assertNotNull(documents);

}

}
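For orientation (this part was written for this article, not generated by Cursor), a minimal usage sketch showing how calling code could pick one of the generated FileManager implementations and exercise the common interface. The switch on a storage-type string is an assumption; a real project would more likely choose the implementation via configuration or Spring profiles.

import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class FileManagerDemo {

    // Pick an implementation by storage type (illustrative factory)
    public static FileManager create(String storageType) throws Exception {
        switch (storageType) {
            case "minio": return new MinioFileManagerImpl();
            case "mongo": return new MongoFileManagerImpl();
            case "ftp":   return new FtpFileManagerImpl();
            default:      return new FileManagerImpl(); // local disk
        }
    }

    public static void main(String[] args) throws Exception {
        FileManager fileManager = create("local");
        // Upload a small text file, then read it back through the same interface
        InputStream in = new ByteArrayInputStream("hello".getBytes(StandardCharsets.UTF_8));
        fileManager.uploadFile(in, "hello.txt", "text/plain");
        try (InputStream out = fileManager.downloadFile("hello.txt")) {
            System.out.println(new String(out.readAllBytes(), StandardCharsets.UTF_8));
        }
    }
}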

Result 2 - Example

ChunkUploadController.java

Note: all of the code was generated automatically by the AI

import io.swagger.annotations.Api;
import io.swagger.annotations.ApiOperation;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.CrossOrigin;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.multipart.MultipartFile;
import java.io.IOException;

/**
 * This controller handles chunked file upload and merging.
 */
@CrossOrigin
@Api(tags = "Chunked upload")
@RestController
@RequestMapping("/chunk")
public class ChunkUploadController {

    @Autowired
    private ChunkUploadService chunkUploadService;

    /**
     * Upload one chunk of a file.
     * @param file        the file chunk
     * @param chunkNumber the number of the current chunk
     * @param totalChunks the total number of chunks
     * @param identifier  the unique identifier of the file
     * @param filename    the file name
     * @throws IOException if an I/O error occurs
     */
    @ApiOperation(value = "Upload one chunk of a file", notes = "Uploads a single chunk so it can be merged later")
    @PostMapping("/upload")
    public ResponseEntity<Void> upload(@RequestParam("file") MultipartFile file,
                                       @RequestParam("chunkNumber") Integer chunkNumber,
                                       @RequestParam("totalChunks") Integer totalChunks,
                                       @RequestParam("identifier") String identifier,
                                       @RequestParam("filename") String filename) throws IOException {
        chunkUploadService.upload(file, chunkNumber, totalChunks, identifier, filename);
        return ResponseEntity.ok().build();
    }

    /**
     * Merge the uploaded file chunks.
     * @param identifier  the unique identifier of the file
     * @param filename    the file name
     * @param totalChunks the total number of chunks
     * @throws IOException if an I/O error occurs
     */
    @ApiOperation(value = "Merge the uploaded file chunks", notes = "Merges all uploaded chunks into a single file")
    @PostMapping("/merge")
    public ResponseEntity<Void> merge(@RequestParam("identifier") String identifier,
                                      @RequestParam("filename") String filename,
                                      @RequestParam("totalChunks") Integer totalChunks) throws IOException {
        chunkUploadService.merge(identifier, filename, totalChunks);
        return ResponseEntity.ok().build();
    }
}

ChunkUploadService.java

Note: all of the code was generated automatically by the AI

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Comparator;
import org.apache.commons.io.IOUtils;
import org.springframework.stereotype.Service;
import org.springframework.web.multipart.MultipartFile;

/**
 * This class provides methods for uploading and merging file chunks.
 */
@Service
public class ChunkUploadService {

    /**
     * Upload a file chunk with the given parameters.
     *
     * @param file        the chunk file to upload
     * @param chunkNumber the number of the chunk being uploaded
     * @param totalChunks the total number of chunks the file is split into
     * @param identifier  the identifier of the file being uploaded
     * @param filename    the name of the file being uploaded
     * @throws IOException if an I/O error occurs
     */
    public void upload(MultipartFile file, Integer chunkNumber, Integer totalChunks, String identifier, String filename) throws IOException {
        Path chunkPath = Paths.get("uploads", identifier, chunkNumber.toString());
        Files.createDirectories(chunkPath.getParent());
        if (!Files.exists(chunkPath)) {
            Files.write(chunkPath, file.getBytes());
        } else {
            // handle the case where the chunk file already exists
        }
    }

    /**
     * Merge all uploaded chunks into a single file for the given file identifier and file name.
     * If not all chunks have been uploaded yet, return without merging.
     *
     * @param identifier  the identifier of the file being uploaded
     * @param filename    the name of the file being uploaded
     * @param totalChunks the total number of chunks the file is split into
     * @throws IOException if an I/O error occurs
     */
    public void merge(String identifier, String filename, Integer totalChunks) throws IOException {
        if (!isUploadComplete(identifier, totalChunks)) {
            // handle the case where not all chunks have been uploaded yet
            return;
        }
        Path dirPath = Paths.get("uploads", identifier);
        Path filePath = Paths.get("uploads", filename);
        try (OutputStream out = Files.newOutputStream(filePath)) {
            Files.list(dirPath)
                    .filter(path -> !Files.isDirectory(path))
                    .sorted(Comparator.comparingInt(path -> Integer.parseInt(path.getFileName().toString())))
                    .forEachOrdered(path -> {
                        try (InputStream in = Files.newInputStream(path)) {
                            IOUtils.copy(in, out);
                        } catch (IOException e) {
                            throw new UncheckedIOException(e);
                        }
                    });
        }
    }

    /**
     * Check whether all chunks for the given file identifier and total chunk count have been uploaded.
     * Returns true if all chunks have been uploaded, false otherwise.
     *
     * @param identifier  the identifier of the file being uploaded
     * @param totalChunks the total number of chunks the file is split into
     * @return true if all chunks have been uploaded, false otherwise
     * @throws IOException if an I/O error occurs
     */
    public boolean isUploadComplete(String identifier, Integer totalChunks) throws IOException {
        Path dirPath = Paths.get("uploads", identifier);
        long count = Files.list(dirPath)
                .filter(path -> !Files.isDirectory(path))
                .count();
        return count == totalChunks;
    }
}

ChunkUpload-1.html

Chunked upload

Note: all of the code was generated automatically by the AI

Chunk Upload

ChunkUpload-2.html

Chunked upload with progress bars (a per-chunk progress bar and an overall progress bar)

Note: all of the code was generated automatically by the AI

Chunk Upload - with progress bars
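The HTML source of the two pages is not reproduced here. As a rough illustration of the client-side flow they implement (slice the file, POST each chunk to /chunk/upload, then call /chunk/merge), here is a small Java client sketch. It assumes the ChunkUploadController above is reachable on localhost:8080; the chunk size, identifier scheme, and file name are made up for the example.

import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Arrays;
import org.springframework.core.io.ByteArrayResource;
import org.springframework.http.HttpEntity;
import org.springframework.http.HttpHeaders;
import org.springframework.http.MediaType;
import org.springframework.util.LinkedMultiValueMap;
import org.springframework.util.MultiValueMap;
import org.springframework.web.client.RestTemplate;

public class ChunkUploadClientSketch {

    private static final int CHUNK_SIZE = 1024 * 1024; // 1 MB per chunk
    private static final String BASE_URL = "http://localhost:8080/chunk";

    public static void main(String[] args) throws Exception {
        byte[] data = Files.readAllBytes(Path.of("big-file.bin"));
        String filename = "big-file.bin";
        String identifier = filename + "-" + data.length; // simple unique id
        int totalChunks = (data.length + CHUNK_SIZE - 1) / CHUNK_SIZE;
        RestTemplate rest = new RestTemplate();

        // 1. Upload each slice to /chunk/upload
        for (int i = 0; i < totalChunks; i++) {
            byte[] chunk = Arrays.copyOfRange(data, i * CHUNK_SIZE, Math.min(data.length, (i + 1) * CHUNK_SIZE));
            MultiValueMap<String, Object> form = new LinkedMultiValueMap<>();
            form.add("file", new ByteArrayResource(chunk) {
                @Override
                public String getFilename() { return filename; }
            });
            form.add("chunkNumber", String.valueOf(i));
            form.add("totalChunks", String.valueOf(totalChunks));
            form.add("identifier", identifier);
            form.add("filename", filename);
            HttpHeaders headers = new HttpHeaders();
            headers.setContentType(MediaType.MULTIPART_FORM_DATA);
            rest.postForEntity(BASE_URL + "/upload", new HttpEntity<>(form, headers), Void.class);
        }

        // 2. Ask the server to merge the uploaded chunks
        rest.postForEntity(BASE_URL + "/merge?identifier={id}&filename={name}&totalChunks={total}",
                null, Void.class, identifier, filename, totalChunks);
    }
}

The second page adds a per-chunk progress bar and an overall progress bar on top of this same flow.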

Common Cursor Problems and Solutions

Generation stops partway: it writes part of the code and then stops. How do you make it continue?

When using Ctrl + K or Ctrl + L, the generated content is sometimes incomplete and stops halfway. To make it keep writing:

# Two prompts make it continue: enter continue or 继续

Ctrl + K, then enter 继续 (continue)

Some Impressions After Using Cursor

- Results are non-deterministic: the same prompt can return very different output, and even repeating exactly the same question will not necessarily reproduce the previous answer.
- It cannot write an entire program in one go; the generated code may be incomplete and needs repeated debugging (so the cost of occasional use can still be fairly high).
- It occasionally reports that too many people are using the service and asks you to try again later.
- The generated code is often not generic and usually cannot be dropped into a project as-is; most of it needs manual tweaking by a developer. In the ideal case it can still reduce development time.
- Cursor cannot run or debug code directly; you have to copy the code into your own IDE to debug it.

This concludes the article.

Author: 宇宙小神特别萌

Personal blog: www.zhengjiaao.cn
CSDN: https://blog.csdn.net/qq_41772028?type=lately
Juejin: https://juejin.cn/user/3227821871211390/posts
Jianshu: https://www.jianshu.com/u/70d69269bd09

Code repositories:

Gitee: https://gitee.com/zhengjiaao
GitHub: https://github.com/zhengjiaao

If you found this article useful, please like and bookmark it, and follow so you can find it again later. If you run into problems, feel free to leave a comment below.
