Gradient descent methods do not always converge to the same point. Their convergence behavior depends on several factors, such as the shape of the objective function, the choice of learning rate, and the starting point.
For convex functions with a single global minimum, gradient descent typically converges to the same point regardless of the initial starting point.
For non-convex functions, which may have multiple local minima or saddle points, gradient descent can converge to different solutions depending on the initial conditions and the shape of the function. In such cases, the algorithm might end up in a local minimum or at a saddle point rather than the global minimum.
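Here is a minimal sketch of that effect on a toy non-convex 1-D function with two minima; the function, helper names, and learning rate are illustrative choices, not from any particular library:

```python
# Plain gradient descent on f(x) = x**4 - 4*x**2 + x, which has two local minima
# (the left one is the global minimum, the right one is only local).

def f(x):
    return x**4 - 4*x**2 + x

def grad_f(x):
    return 4*x**3 - 8*x + 1  # derivative of f

def gradient_descent(x0, lr=0.01, steps=1000):
    x = x0
    for _ in range(steps):
        x -= lr * grad_f(x)  # step opposite the gradient
    return x

# Different starting points land in different minima.
print(gradient_descent(-2.0))  # ends near the left (global) minimum, x ~ -1.47
print(gradient_descent(+2.0))  # ends near the right (local) minimum, x ~ +1.35
```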

To add to this: in practical applications such as deep learning, gradient descent is often run multiple times with different initializations, and the best solution found is kept, even though it is not guaranteed to be the global optimum.
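A quick sketch of that random-restart idea, reusing the hypothetical f and gradient_descent helpers from the example above:

```python
import random

# Run gradient descent from several random starting points and keep the
# solution with the lowest objective value seen.
best_x, best_val = None, float("inf")
for _ in range(10):
    x0 = random.uniform(-3.0, 3.0)   # random initialization
    x = gradient_descent(x0)         # one descent run from this start
    if f(x) < best_val:
        best_x, best_val = x, f(x)

print(best_x, best_val)  # best local minimum found across the restarts
```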