
[BUG] Cannot submit tasks to master #83

Open
rilakgg opened this issue Mar 29, 2022 · 4 comments
rilakgg commented Mar 29, 2022

Hi,
I ran into an issue. Can anyone help? Thanks in advance.

After deploying spark-standalone-cluster-on-docker (image: andreper/spark-master:3.0.0) on a server (192.XX.X.X), I tried to test it from another PC (192.XX.X.Y).
Command-line steps:
$ spark-shell --master spark://192.XX.X.X:7077

// Monte Carlo sample: count random points that fall inside the unit circle
val count = sc.parallelize(1 to 1000).filter { _ =>
  val x = math.random
  val y = math.random
  x*x + y*y < 1
}.count()

I got the error below (the message repeats in an infinite loop).

WARN TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Build Env.

  • Image: andreper/spark-master:3.0.0
  • Docker Engine version: 20.10.14
  • Docker Compose version: v2.2.3
@Estreuselito

@rilakgg I am facing the same issue. Did you find a solution for that problem?

@ThomasMannKenbun

@Estreuselito @rilakgg

This is not a bug.

You have another Spark application already running on this cluster (e.g. a Spark session created in Jupyter Lab). Close that session/Spark application.

In a standalone cluster, the default behavior is for a Spark application to grab all resources. If that session is kept open permanently, it blocks any other application from getting any resources.
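
A minimal sketch of how to let several applications share the cluster (the values below are only placeholders): cap the resources each application requests when it connects, so the standalone scheduler has cores left over for others.

$ spark-shell --master spark://192.XX.X.X:7077 \
    --conf spark.cores.max=2 \
    --conf spark.executor.memory=1g

spark.cores.max limits how many cores the application grabs cluster-wide; without it, a standalone-mode application takes every available core by default.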

@mrn-aglic

@ThomasMannKenbun is there a way around this?

@mrn-aglic

@ThomasMannKenbun I think it would work if Jupyter were run in the same container as the Spark master and pyspark were started with the --master local[*] option. Tested on the Iceberg image; the app isn't listed on localhost:7077.
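
A minimal sketch of that workaround, assuming the master container from the compose file is named spark-master (the container name is an assumption):

$ docker exec -it spark-master pyspark --master "local[*]"

In local[*] mode the driver and executors run inside that single container on its own cores, so the application never registers with the standalone master and does not compete for cluster resources.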
