I wrote a DataFrame into a Delta table (e.g., demo_table) using overwrite mode, which involves dropping the table beforehand. After the write completed successfully, I ran the OPTIMIZE command on the table, but the OPTIMIZE operation took nearly an hour to complete. How can I speed this up?
Note: The table is partitioned.

Command: OPTIMIZE schema.demo_table ZORDER BY (custom_id, sales_date)

Note: custom_id is a newly generated column. When we build the final DataFrame, the final record count is about 3 million records. It is not a wide table; the schema has only basic data types (integer, string) with no complex data types.

Observation: when I Z-order by an existing column instead, the command finishes within 5 minutes.
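Roughly, the flow looks like this (a simplified PySpark sketch; the source table, the partition column, and the way custom_id is generated are illustrative placeholders, only demo_table, custom_id and sales_date come from the actual job):

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical source; the real source table name differs
df = spark.table("schema.source_table")

# custom_id is a newly generated column, not one that exists in the source data
# (assumption: generated here with monotonically_increasing_id for illustration)
final_df = df.withColumn("custom_id", F.monotonically_increasing_id())

# Overwrite the Delta table; the previous table contents are dropped/replaced.
# "region" is a placeholder for the actual partition column.
(final_df.write
    .format("delta")
    .mode("overwrite")
    .option("overwriteSchema", "true")
    .partitionBy("region")
    .saveAsTable("schema.demo_table"))

# Then compact the files and co-locate data on the Z-order columns
spark.sql("OPTIMIZE schema.demo_table ZORDER BY (custom_id, sales_date)")
```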