Array databases are well suited for complex multidimensional analysis. However, array queries are often performance-constrained by the high computational demands of the underlying algorithms. We explore the use of GPUs to accelerate these algorithms and study the end-to-end effects on performance, power, and energy efficiency. We have extended SciDB, a popular array database, to use GPUs, improving its query performance by 1.5X to 11X. While GPUs improve both performance and energy efficiency, several design issues prevent us from reaching the touted 100X performance benefits of GPUs. We provide a detailed experimental analysis of these bottlenecks, which relate to array partitioning, load imbalance, and CPU-GPU hybrid execution.