My Experience With Cursor

#tinker-tales #ai

2025-04-14

Hello, my dear reader. Once again I am writing an article about how I overcame my reluctance towards AI and attempted to embrace it more by integrating it into my workflow.
If, like me, you have been living under a rock for the past few months: this is Cursor.
From what I can understand, it is a fork of VS Code, which even let me import all my settings and extensions, integrated with an LLM service that reads from embeddings derived from my project structure.

The goal of this article is to identify in what forms AI support can enhance my workflow, without me losing control or feeling redundant. I pride myself on my self-acquired knowledge of programming and technology and my unique perspective on it as a formally trained media designer and creative technologist. I am not a computer scientist, nor have I studied it, but I grew to love programming as a profession, hobby and passion. I don't just see it as a means to get a paycheck, but as a tool to express my thoughts, structure my ideas and execute my dreams. And I have been doing so for almost 10 years.

I think my biggest strength is creating adaptable and extendable code that expresses the concepts of my core domain, design, clearly and comprehensibly. I don't adjust my thought process to the medium; I shape the medium to reflect my thought process.
For that reason I will still be conceptualizing and planning the architecture myself, all the way down to the algorithms that solve the individual problems behind each feature.

Taking Care of Chores

I often have such a clear conceptual wireframe in my head that implementing it is just a matter of navigating data structures, often defined in some external library, and finding the right API call. In other words, a chore. Even without AI I would have to read documentation, find what I need and simply paste it into my code. There is no room for personal ingenuity.
No problem solving. Don't get me wrong, I will still read a bit of documentation, especially the parts that explain the conceptual framing of the framework, to understand what I am dealing with and how it solves the problem. The main theme is to stay in control while getting enhanced!

Leading by Example

Another way I would like to try out the assistant is by showing it code I have written for one use case, as an example or model for it to reproduce, applied to a different use case. Once again a rule similar to the one before applies: the example that serves as the model comes from my own invention; I simply make the AI imitate my solution.

Now one could say: why do you have to reapply a model to a new use case?
Aren't you repeating yourself? Don't you know about abstraction? Brother, I breathe abstraction! So let me clarify the exact scenarios I am referring to.

For a project I am working on, I am trying to extract function and method type information and other metadata to be accessible at runtime in Electron, as nodes for a workflow system. To accomplish that I parse the source code into an AST and extract every arrow-function expression `const a = a1 => b1` and classic function definition `function(){}` that has been marked with a `@node` comment. And as I said, it is then a matter of traversing some under-documented open-source APIs from a library to piece everything together. I know what my context is, and I know what I need it to do. Just not the exact structure and names.

    const normalFunctions = file.getFunctions().filter(isNode);
    const expressionFunctions = file.getVariableDeclarations()
        .filter(isNode)
        .filter(vd => vd.getInitializer()?.getKind() === SyntaxKind.ArrowFunction);

This is the implementation I used, written myself by doing the chore manually. In the case of this API, for example, the name can only be extracted through a method call, and most information resides in some abstraction that you get via `.getInitializer()`.
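
The `isNode` filter used above and the `mapTypes` mapper used below are helpers I don't show in full. Here is a minimal sketch of how they might look, assuming the `@node` marker lives in a comment attached to the node itself (the real versions may differ):

import { Node, ParameterDeclaration } from "ts-morph";

// Sketch of isNode: true when a leading or trailing comment carries @node.
// In practice the comment may attach to the enclosing statement instead,
// so a real version might need to climb one level up.
const isNode = (n: Node): boolean =>
    [...n.getLeadingCommentRanges(), ...n.getTrailingCommentRanges()]
        .some(c => c.getText().includes("@node"));

// Sketch of mapTypes: pairs each parameter with the text of its type,
// matching the { name, type } shape the test suite checks further down.
const mapTypes = (p: ParameterDeclaration) => ({
    name: p.getName(),
    type: p.getType().getText(),
});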

const reflectFunction = fn => addCommonFeatures(fn, {
    name: fn.getName(),
    parameters: fn.getParameters().map(mapTypes),
    return: fn.getReturnType().getText()
})


const reflectExpression = (ex) => {
    const declaration = ex.getVariableStatement().getDeclarations()[0];
    const name = declaration.getName();
    const arrowFunction = declaration.getInitializer()
    const parameters = arrowFunction.getParameters();
    return addCommonFeatures(ex,
        {name, parameters: parameters.map(mapTypes), return: arrowFunction.getReturnType().getText()});
}

I even used some abstraction by parametrization to factor out the common denominator.

const addCommonFeatures = (fn, struc) => {
    return ({...struc, alias: getAliases(fn)});
}
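
`getAliases` is the other helper I don't show. A hypothetical sketch, assuming any aliases are listed after the `@node` marker (e.g. `// @node sum, add`); the real format may differ:

import { Node } from "ts-morph";

// Hypothetical: extract comma-separated aliases following the @node marker.
// Returns undefined when no aliases are present, matching the optional
// `alias` field the tests check later.
const getAliases = (n: Node): string[] | undefined => {
    const comment = [...n.getLeadingCommentRanges(), ...n.getTrailingCommentRanges()]
        .map(c => c.getText())
        .find(text => text.includes("@node"));
    if (!comment) return undefined;
    const aliases = comment
        .slice(comment.indexOf("@node") + "@node".length)
        .split(",")
        .map(s => s.trim())
        .filter(Boolean);
    return aliases.length ? aliases : undefined;
};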

Up to this point I had not written the case for methods. I knew that the process would be very similar, just that the API calls for methods would differ substantially. So no chance for abstraction. This is also a good example of an instance where abstraction is not only unneeded but wrong: functions and methods are two totally different concepts, and extracting and processing them belongs to two different responsibilities in the code.
Younger me would probably have tried to find a common pattern, coupling the code unnecessarily.

So I gave the AI the prompt

Under /build/codegen/index.ts I have created an AST parser to extract types at build time, to be read at runtime.  
Can you please write one more procedure that extracts the class name and the method name and their type (like I did with functions and expressions) of all the methods that have the "@node" comment next to them?

And it produced this

const reflectMethod = (method) => {
    const classDeclaration = method.getParent().getParent();
    return addCommonFeatures(method, {
        className: classDeclaration.getName(),
        methodName: method.getName(),
        parameters: method.getParameters().map(mapTypes),
        return: method.getReturnType().getText()
    });
}

...

const classMethods = file.getClasses()
        .flatMap(cls => cls.getMethods())
        .filter(isNode);
...
methods: classMethods.map(reflectMethod)
    })
}).filter(file => file.functions.length > 0 || file.methods.length > 0);

Here is what I like: without me explicitly telling it to, it recognized my own coding conventions and imitated them. It stuck to my conceptual framework. This is exactly the result I was expecting. I am happy. Now let's test it!

npm run build:codegen

Then I realized, wait a moment. I don't have the test suite yet for my new feature.
So let's add that as well!

reflection.test.js contains an automated test suite over all function definitions. Can you prepare the same for my class methods, based on your implementation of codegen/index.ts?

First of all, it could not identify codegen/index.ts, even though it had just worked on it.

Cursor_5Kkvh6vE7j.png

If you read through the response, you can clearly see that it lost all context from its previous task. This is good to know. Also, previously it correctly extended the new feature inside the existing file; this time it created its own file. Let's modify the prompt.

under /test/reflection.test.js you can find a jest test suite that checks for completeness of function signatures.  
Under /build/codegen/index.ts you can find an implementation that extracts this information from the AST of the source code. Based on the implementation for `methods` can you add a new series of tests inside reflection.test.js?

And this time, hurray. It did exactly what I had in mind!

describe(`Test reflected code for method signatures`, () => {
    for(const data of reflected_data){
        if(data.methods.length){
            test(`testing file ${data.path} for methods`, () => {
                expect(data).toBeTruthy();
            })
            test.each(data.methods.map(m => [m.className + '.' + m.methodName, m]))(`testing method %s`, (_, method) => {
                // Test basic structure
                expect(method.className).toBeTruthy();
                expect(method.methodName).toBeTruthy();
                // Test parameters
                expect(method.parameters).toBeTruthy();
                for (const param of method.parameters) {
                    expect(param.name).toBeTruthy();
                    expect(param.type).toBeTruthy();
                }
                // Test return type
                expect(method.return).toBeTruthy();
                // Test aliases if present
                if (method.alias) {
                    expect(Array.isArray(method.alias)).toBeTruthy();
                    method.alias.forEach(alias => {
                        expect(typeof alias).toBe('string');
                    });
                }
            })
        } else {
            console.log(`No methods found in file ${data.path}`);
        }
    }

})

Now the last step is to actually mark a method in a class as something we want to reflect as a node. While this is so trivial that writing a detailed prompt for the AI takes more time than doing it manually, I wanted to test the accuracy of the agent. To protect the intellectual property of my team's architecture I will replace path names and class names with wildcards.

Looking at /**/*.ts can you read the usage of the comment `@node` and infer it's syntax and then under /**/*.ts identify the method `run` of the class `****` and apply such a comment to mark it as a node?

It did it, even inferring some context from our class definition for additional metadata in the node definition. However, it placed the comment on the first row after the opening curly brace, while I need it at the end of the definition. This is my fault, though, because the provided example was not a multi-line definition but an inline expression. There was therefore a syntactical ambiguity for the AI to consider.
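
To illustrate the ambiguity, with hypothetical names since the real ones are withheld:

// The agent's first attempt: the marker on the first row inside the body.
class SomeWorker {
    run(input: string) {
        // @node
        return input;
    }
}

// What I actually wanted: the marker at the end of the definition.
class OtherWorker {
    run(input: string) {
        return input;
    } // @node
}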
So I add this to the prompt.

Please append the comment at the end of the definition of the method.

And now it did it correctly!

Let's test it.

And we have the first errors. As I expected.

className: classDeclaration.getName(),
                                    ^
TypeError: classDeclaration.getName is not a function

It seems that instead of navigating the API, it solely relied on my implementation as an example and applied that. So let me specify the exact library I am using and explicitly tell it to look it up. (I show the results in the next section.)

Let's finish this feature by committing the changes. Again, writing the prompt is more work than it is worth; it is just a test.

Please commit the current changes following the `git log` commit message convention. The core of the message is extending the type reflection with methods.

I stared for five minutes at the first step of the agent, running git log --oneline | cat, waiting for it to finish, until I saw that it was waiting for a confirmation to run this command. Very interesting. Again a layer of control that you as the user have over the actions of the AI. I also realized that it assumes I am on Unix, while I am on Windows.

After chewing through a wall of errors, I think it realized its mistake and tried some more general git commands. But without catting the stdout, I think it is unable to access the output.

Please be aware that my OS is windows and my environment shell is Powershell.

For some reason, when running simple commands such as git add . it trips up and gets stuck processing forever. So it seems that the operations side of the workflow can't be handled very efficiently by it.

Sherlock and Watson

One more thing I want to add to my list: I also love to interpret error messages, investigate the source and understand the cause of the problem. It is like solving a puzzle, or investigating a crime. Once the cause is identified, however, implementing the fix falls again into the chore category: arresting the criminal and processing the paperwork. A job for Watson, I mean Cursor, once again. So I would like to test this by helping the AI correct it, giving it my assumption and telling it what to fix.

I have encountered a TypeError when running /build/codegen/index.ts. The error is inside the reflectMethod implementation. I assume that the API calls are wrong. Can you please read the implementation understanding the context and goal of the function, then read the API docs of ts-morph and correct the implementation using the correct structure and API calls?

And, it fixed it. Now my parsing and test suite returned without any error.
Turns out its previous implementation did not account for every possible method-to-class relationship, which turned my specific case into undefined.

It added an exception handler.

if (!classDeclaration || !classDeclaration.isKind(SyntaxKind.ClassDeclaration)) {
    throw new Error(`Method ${method.getName()} is not part of a class declaration`);
}
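
For context, here is a plausible reconstruction of the corrected reflectMethod around that guard. I am assuming the fix boils down to resolving the class with a single getParent() call plus the guard, instead of the original blind getParent().getParent() chain:

import { MethodDeclaration, SyntaxKind } from "ts-morph";

// Assumed reconstruction, not the verbatim fix: resolve the enclosing class
// explicitly and fail loudly when the parent is not a class declaration.
const reflectMethod = (method: MethodDeclaration) => {
    const classDeclaration = method.getParent();
    if (!classDeclaration || !classDeclaration.isKind(SyntaxKind.ClassDeclaration)) {
        throw new Error(`Method ${method.getName()} is not part of a class declaration`);
    }
    return addCommonFeatures(method, {
        className: classDeclaration.getName(),
        methodName: method.getName(),
        parameters: method.getParameters().map(mapTypes),
        return: method.getReturnType().getText(),
    });
};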

Coach and Peer Reviewer

One thing that blew my mind, and which increases its value over external LLMs, is its awareness of the repository structure and content. So I did yet another experiment and asked it to break down my architecture, starting from one folder.

So I switch the AI mode from Agent to Ask.

Can you tell me your thoughts of the architecture found in /src/main/WorkflowSystem ?
...
How do you evaluate the architecture design of what you just analyzed?

Overall Assessment:

This is a well-designed system that follows solid architectural principles. The separation of concerns is particularly impressive, and the system seems built to handle complex workflows while maintaining stability and performance. The architecture shows careful consideration for future extensibility and maintainability.

The trade-off between complexity and functionality seems appropriate given what appears to be a complex domain problem. The use of modern patterns (like adapters, event emitters, and worker processes) shows forward-thinking design.

Score: 8.5/10 - A robust, well-thought-out architecture with room for some minor improvements in documentation and testing infrastructure.

Recommendations:

  1. Consider adding more comprehensive documentation about component interactions
  2. Add more explicit testing interfaces for complex async operations
  3. Consider implementing a more detailed error recovery strategy
  4. Maybe add monitoring/observability hooks for production debugging
  5. Consider adding a facade pattern to simplify the API surface for common operations

The architecture is particularly strong in its ability to evolve and adapt to new requirements while maintaining stability and performance, which are crucial qualities for a workflow system.

Aww, thanks AI :)

And it also identified the main component that is missing, which is an error-recovery strategy, something that will be added later once everything runs as intended. And I will have a look into the facade pattern, and whether that is good advice.

Reminds me a bit of git porcelain and git plumbing commands: we only give access to the simplified API. But I think the AI in this case misunderstood the context of my app. This is not a library, but an integral part of the system. The workflow system itself is integrated into the main process through an IPCManager, which handles and encapsulates the communication with the frontend.
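
Roughly, and with hypothetical names and message shapes since the real ones are withheld, the integration looks like this:

import { ipcMain, IpcMainEvent } from "electron";

// Assumed message shape; the real WorkflowIPCMessage protocol is richer.
type WorkflowIPCMessage = { kind: string; payload: unknown };

class WorkflowSystem {
    handle(msg: WorkflowIPCMessage): void {
        // run workflows, emit results back through the manager...
    }
}

// The IPCManager encapsulates the protocol: the renderer never talks to
// the WorkflowSystem directly.
class IPCManager {
    constructor(private readonly system: WorkflowSystem) {
        ipcMain.on("workflow", (_event: IpcMainEvent, msg: WorkflowIPCMessage) =>
            this.system.handle(msg));
    }
}

new IPCManager(new WorkflowSystem());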

About your recommendation (5.). Is the facade pattern really a good design choice? Have you considered the full context of the WorkflowSystem and how it is integrated in the app? It is itself initialized and passed to an IPCManager defining a clear message protocol to the Frontend. Please review your recommendation based on this amendment.

Ah, you make an excellent point - I apologize for that recommendation without fully considering the existing system context. Let me revise my thinking:
Looking back at the code, particularly how the WorkflowSystem is integrated:

  1. The WorkflowSystem is already effectively serving as a high-level interface through the IPC protocol
  2. The IPC message protocol (WorkflowIPCMessage) provides a well-defined communication contract
  3. The system already has a clean integration point through the IPCManager, which handles the communication between frontend and the WorkflowSystem
    You're absolutely right - adding a facade pattern would be redundant and potentially harmful.

This is a good reminder that architectural patterns should be recommended only after fully understanding the existing system context and integration points. Thank you for the correction - it helped me better appreciate the existing design decisions.

You are very welcome :)

Refactoring Machine

Again, refactoring has two components. On one side the conceptual component, which has to do with remodeling, extracting, and identifying abstractions or structure. On the other side, refactoring often involves very repetitive tasks: renaming, copying and pasting.

I have one such case. Initially I was not expecting many cases inside a function and modeled it as an if-else chain; now I need to change it to a switch. Let's try it. This is also a good application of the manual mode of the AI. I select the code:

if(typed_operators.some(to => to.id == item.id)){
        return (
            {
                id: `${item.id}_${id}`,
                type: "operator",
                position: newNodePosition!,
                data: { binding: typed_operators.find(to => to.id == item.id)},
            }
        )
    } else if (item.id == "preview"){
        return (
            {
                id: `${item.id}_${id}`,
                type: "preview",
                position: newNodePosition!,
                data: { label: item.name, break: false },
            }
        )
    } else if (item.id == "action"){
        return (
            {
                id: `${item.id}_${id}`,
                type: "preview",
                position: newNodePosition!,
                data: { status: {} },
            }
        )
    } else if (item.id == "load"){
        return (
            {
                id: `${item.id}_${id}`,
                type: "loader",
                position: newNodePosition!,
                data: {type: "file"},
            }
        )
    } else if (item.id == "export"){
        return (
            {
                id: `${item.id}_${id}`,
                type: "exporter",
                position: newNodePosition!,
                data: { },
            }
        )
    } else if (item.id == "iterator"){
        return (
            {
                id: `${item.id}_${id}`,
                type: "exporter",
                position: newNodePosition!,
                data: { },
            }
        )
    } else if (item.id == "composer"){
        return (
            {
                id: `${item.id}_${id}`,
                type: "exporter",
                position: newNodePosition!,
                data: { },
            }
        )
    } else if (item.id == "decomposer"){
        return (
            {
                id: `${item.id}_${id}`,
                type: "exporter",
                position: newNodePosition!,
                data: { },
            }
        )
    }  else if (Object.keys(mockups).includes(item.id)){
        return (
            {
                id: `${item.id}_${id}`,
                type: item.id,
                position: newNodePosition!,
                data: { label: item.name },
            }
        )
    } else {
        return (
            {
                id: `${item.id}_${id}`,
                type: item.id,
                position: newNodePosition!,
                data: {},
            }
        )
    }

As we can see, it is not a clean switch case; it is a mix of a switch, an extra condition and a default.

Refactor the following collection of `if else` so that the hardcoded id checks happen inside a switch returning the respective object, the default case breaks without returning, and the remaining cases are handled by the current if else implementation.

Cursor_ujd9tNlKOf.png

In red is what was removed and in green what was added. At first I was a bit disappointed by its solution, more specifically by the way it wedged the switch statement in between the operator if-clause and the mockup if-clause. But after observing it for a while I decided to embrace it, and I even saw a pattern, which I confirmed with comments.

// Reflected Nodes

    if(typed_operators.some(to => to.id == item.id)){
        return {
            id: `${item.id}_${id}`,
            type: "operator",
            position: newNodePosition!,
            data: { binding: typed_operators.find(to => to.id == item.id)},
        }
    }

    // Specialized Nodes

    switch (item.id) {
        case "preview":
            return {
                id: `${item.id}_${id}`,
                type: "preview",
                position: newNodePosition!,
                data: { label: item.name, break: false },
            }

        case "action":
            return {
                id: `${item.id}_${id}`,
                type: "preview",
                position: newNodePosition!,
                data: { status: {} }, // TODO: Add reflected action info
            }

        case "load":
            return {
                id: `${item.id}_${id}`,
                type: "loader",
                position: newNodePosition!,
                data: {type: "file"},
            }

        case "export":
        case "iterator":
        case "composer":
        case "decomposer":

            return {
                id: `${item.id}_${id}`,
                type: "exporter",
                position: newNodePosition!,
                data: { },
            }

        default:
            break;
    }
  
    // Mockups

    if (Object.keys(mockups).includes(item.id)){
        return {
            id: `${item.id}_${id}`,
            type: item.id,
            position: newNodePosition!,
            data: { label: item.name },
        }
    }

    // Empty

    return {
        id: `${item.id}_${id}`,
        type: item.id,
        position: newNodePosition!,
        data: {},
    }

To be honest, this reminds me the most of other AI workflows I am used to. You use the AI to do all the cumbersome work necessary for a new variation (like a refactoring) just to try it out, and if you are happy with the overall result, you can make some final adjustments before committing fully to the solution. Just imagine having to do all this work yourself, just to realize that you don't like it.

API Lookup

Tips

  • Unlike in VS Code, uncaught exceptions are not set as breakpoints by default; you need to activate them. I found this out while waiting indefinitely for my debugger to launch the app. At first I thought it was a problem with Cursor.