Remove Duplicates from Sorted List Algorithm

The Remove Duplicates from Sorted List algorithm is a widely used approach for eliminating duplicate elements from a sorted list or array. It is designed specifically for data that has already been sorted, because it relies on the ordering: in sorted data, all copies of a value sit next to each other. The goal is to modify the input in place so that each value appears exactly once. This is useful wherever duplicates must be filtered out, such as in database query results, data preprocessing, or optimization tasks.

The algorithm makes a single pass through the data, comparing each element with the element that follows it. If the two are equal, the second is a duplicate and is dropped; in a linked list this means relinking the previous kept node past it. The pass continues until the entire structure has been traversed, leaving each value exactly once. Because it requires only one traversal and constant extra space, the time complexity is O(n) and the space complexity is O(1), which makes it efficient enough for large-scale data processing.
/**
 * Definition for singly-linked list.
 * struct ListNode {
 *     int val;
 *     ListNode *next;
 *     ListNode(int x) : val(x), next(NULL) {}
 * };
 */
class Solution {
public:
    ListNode *deleteDuplicates(ListNode *head) {
        if (head == NULL) {
            return NULL;
        }
        ListNode* prev = head;        // last node kept in the result
        ListNode* curr = head->next;  // node currently being examined
        while (curr) {
            if (prev->val != curr->val) {
                // New value: link it after the kept portion and advance.
                prev->next = curr;
                prev = prev->next;
            }
            curr = curr->next;
        }
        prev->next = NULL;  // detach any trailing duplicates
        return head;
    }
};
